https://en.wikipedia.org/wiki/N%20conjecture
In number theory, the n conjecture is a generalization of the abc conjecture to more than three integers. Formulations Given , let satisfy three conditions: (i) (ii) (iii) no proper subsum of equals First formulation The n conjecture states that for every , there is a constant , depending on and , such that: where denotes the radical of the integer , defined as the product of the distinct prime factors of . Second formulation Define the quality of as The n conjecture states that . Stronger form A stronger variant of the n conjecture has been proposed, in which setwise coprimality of is replaced by pairwise coprimality of . There are two different formulations of this strong n conjecture. Given , let satisfy three conditions: (i) are pairwise coprime (ii) (iii) no proper subsum of equals First formulation The strong n conjecture states that for every , there is a constant , depending on and , such that: Second formulation Define the quality of as The strong n conjecture states that .
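The formulas in the passage above appear to have been stripped during text extraction. As a hedged reconstruction from the standard statement of the n conjecture (commonly attributed to Browkin and Brzeziński), with the caveat that the original article's exact notation is lost:

```latex
% Reconstruction (assumption: standard formulation). For an integer n >= 3,
% let a_1, ..., a_n be integers satisfying:
%   (i)   gcd(a_1, ..., a_n) = 1
%   (ii)  a_1 + a_2 + ... + a_n = 0
%   (iii) no proper subsum of a_1, ..., a_n equals 0.
% First formulation: for every eps > 0 there is a constant C depending on
% n and eps such that
\[
  \max\bigl(|a_1|, \dots, |a_n|\bigr)
    < C_{n,\varepsilon}\,
      \operatorname{rad}\bigl(|a_1 a_2 \cdots a_n|\bigr)^{\,2n-5+\varepsilon},
\]
% where rad(m) is the product of the distinct primes dividing m.
% Second formulation: define the quality of (a_1, ..., a_n) as
\[
  q(a_1,\dots,a_n)
    = \frac{\log \max\bigl(|a_1|,\dots,|a_n|\bigr)}
           {\log \operatorname{rad}\bigl(|a_1 a_2 \cdots a_n|\bigr)};
\]
% the n conjecture then states
\[
  \limsup q(a_1,\dots,a_n) = 2n - 5 .
\]
```

For n = 3 the exponent 2n − 5 equals 1, recovering the abc conjecture.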
https://en.wikipedia.org/wiki/Sagol
Sagol (), or beef leg bone, is an ingredient in Korean cuisine. Sagol is often boiled to make a broth, called sagol-yuksu (), or beef leg bone broth, for Korean soups such as gomguk (beef bone soup), galbi-tang (short rib soup), tteokguk (sliced rice cake soup), kal-guksu (noodle soup), or gukbap (soup with rice). Sagol is rich in the protein collagen and in minerals such as calcium. In traditional Korean culture, it is believed to reinvigorate the body. However, no scientific evidence supports this claim. In the summer, sagol-yuksu (broth) is served to pregnant or breastfeeding mothers and the sick. In the winter, it is served with rice as a warm and nutritious meal. Etymology The word sagol roughly translates to 'four bones': 'sa' means four and 'gol' means bone. Together, they refer to the thigh and shin bones of a cow or bull. The term is primarily used in cooking. Anatomy Cattle have eight sagol bones. Sagol uses the thigh and shin bones from a cow's four legs. Sagol can be classified by breed (hanu, beef cattle, dairy cattle, imported, etc.), sex (cow, bull, steer, etc.), or grade. High-grade sagol from hanu beef with a dense ivory and white bone cross-section is typically preferred. A sagol consists of one diaphysis part and two epiphysis parts. The epiphysis parts have an outer layer of compact bone and an inner layer of spongy bone. The diaphysis contains periostea outside and marrow inside. See also Beef shank Ham hock Long bone Oxtail
https://en.wikipedia.org/wiki/Lyman-alpha%20blob
In astronomy, a Lyman-alpha blob (LAB) is a huge concentration of gas emitting the Lyman-alpha emission line. LABs are some of the largest known individual objects in the Universe. Some of these gaseous structures are more than 400,000 light-years across. So far they have only been found in the high-redshift universe because of the ultraviolet nature of the Lyman-alpha emission line. Since Earth's atmosphere is very effective at filtering out UV photons, Lyman-alpha photons must be redshifted in order to be transmitted through the atmosphere. The most famous Lyman-alpha blobs were discovered in 2000 by Steidel et al. Matsuda et al., using the Subaru Telescope of the National Astronomical Observatory of Japan, extended the search for LABs and found over 30 new LABs in the original field of Steidel et al., although they were all smaller than the originals. These LABs form a structure which is more than 200 million light-years in extent. It is currently unknown whether LABs trace overdensities of galaxies in the high-redshift universe (as high-redshift radio galaxies—which also have extended Lyman-alpha halos—do, for example), which mechanism produces the Lyman-alpha emission line, or how the LABs are connected to the surrounding galaxies. Lyman-alpha blobs may hold valuable clues to determine how galaxies are formed. The most massive Lyman-alpha blobs have been discovered by Tristan Friedrich et al. (2021), Steidel et al. (2000), Francis et al. (2001), Matsuda et al. (2004), Dey et al. (2005), Nilsson et al. (2006), and Smith & Jarvis et al. (2007). Examples Himiko LAB-1 EQ J221734.0+001701, the SSA22 Protocluster Ton 618, a hyperluminous quasar powering a Lyman-alpha blob; it also possesses one of the most massive black holes known. See also Damped Lyman-alpha system Galaxy filament Green bean galaxy Lyman-alpha forest Lyman-alpha emitter Lyman break galaxy Newfound Blob (disambiguation)
https://en.wikipedia.org/wiki/Preboot%20Execution%20Environment
In computing, the Preboot eXecution Environment (PXE, most often pronounced pixie and often called PXE boot) specification describes a standardized client–server environment that boots a software assembly, retrieved from a network, on PXE-enabled clients. On the client side it requires only a PXE-capable network interface controller (NIC), and it uses a small set of industry-standard network protocols such as DHCP and TFTP. The concept behind PXE originated in the early days of protocols like BOOTP/DHCP/TFTP, and it forms part of the Unified Extensible Firmware Interface (UEFI) standard. In modern data centers, PXE is the most frequent choice for operating system booting, installation and deployment. Overview Since the beginning of computer networks, there has been a persistent need for client systems which can boot appropriate software images, with appropriate configuration parameters, both retrieved at boot time from one or more network servers. This goal requires a client to use a set of pre-boot services based on industry-standard network protocols. Additionally, the Network Bootstrap Program (NBP) which is initially downloaded and run must be built against a client firmware layer (at the device to be bootstrapped via PXE) that provides a hardware-independent, standardized way to interact with the surrounding network booting environment. Here, the availability of and adherence to standards are key factors required to guarantee the interoperability of the network boot process. One of the first attempts in this regard was bootstrap loading using TFTP, standardized in 1984 as RFC 906, which established the Trivial File Transfer Protocol (TFTP), published in 1981 as RFC 783, as the standard file transfer protocol for bootstrap loading. It was followed shortly after by the Bootstrap Protocol standard RFC 951 (BOOTP), published in 1985, which allowed a disk-less client machine to discover its own IP address, the address of a TFTP se
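As a small illustration of the TFTP side of this process, the sketch below builds a TFTP Read Request (RRQ) packet following the wire layout defined in RFC 1350; a PXE client sends a request like this over UDP to fetch its Network Bootstrap Program. The filename `pxelinux.0` is a hypothetical example, not one mandated by the PXE specification.

```python
import struct

def tftp_rrq(filename: str, mode: str = "octet") -> bytes:
    """Build a TFTP Read Request (RRQ) packet per RFC 1350.

    Wire layout: 2-byte opcode (1 = RRQ) | filename | NUL | mode | NUL.
    """
    return (struct.pack("!H", 1)                      # opcode 1 = read request
            + filename.encode("ascii") + b"\x00"      # NUL-terminated filename
            + mode.encode("ascii") + b"\x00")         # NUL-terminated transfer mode

# 'pxelinux.0' is only an illustrative NBP filename.
pkt = tftp_rrq("pxelinux.0")
```

In a real PXE exchange this packet would be sent to UDP port 69 of the TFTP server address that the client learned from the DHCP offer.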
https://en.wikipedia.org/wiki/European%20Programme%20for%20Critical%20Infrastructure%20Protection
The European Programme for Critical Infrastructure Protection (EPCIP) is the doctrine and set of programmes created to identify and protect critical infrastructure that, in case of fault, incident or attack, could seriously affect both the country where it is hosted and at least one other European Member State. History The EPCIP came about as a result of a 2004 consultation by the European Council, which sought a programme to protect critical infrastructure through its 'Communication on Critical Infrastructure Protection in the Fight against Terrorism'. In December 2004 it endorsed the intention of the European Commission to propose a European Programme for Critical Infrastructure Protection (EPCIP) and agreed to the creation of a European Critical Infrastructure Warning Information Network (CIWIN). In December 2006 the European Commission issued its finalised design as directive EU COM(2006) 786; this obliged all member states to adopt the components of the EPCIP into their national statutes. It applied not only to the main area of the European Union but also to the wider European Economic Area. EPCIP also identified National Critical Infrastructure (NCI), whose disruption would only affect a single Member State. It placed the responsibility for protecting items of NCI on their owner/operators and on the relevant Member State, and encouraged each Member State to establish its own National CIP programme.
https://en.wikipedia.org/wiki/Comparison%20of%20early%20word%20processors
This article compares early word processing software. Operating system compatibility This table gives a comparison of which operating systems were compatible with each word processor in 1985.
https://en.wikipedia.org/wiki/Sharadchandra%20Shankar%20Shrikhande
Sharadchandra Shankar Shrikhande (19 October 1917 – 21 April 2020) was an Indian mathematician with notable achievements in combinatorial mathematics. He was notable for his breakthrough work, along with R. C. Bose and E. T. Parker, in disproving the famous conjecture made by Leonhard Euler in 1782 that there do not exist two mutually orthogonal Latin squares of order 4n + 2 for any n. Shrikhande's specialty was combinatorics and statistical designs. The Shrikhande graph is used in statistical designs. Life, education and career He was the fifth of ten siblings. His father worked at a flour mill. He completed his B.Sc. at Government Science College, Nagpur and went on to further studies at the Indian Statistical Institute. He then briefly worked as a lecturer at the Government Science College, Nagpur. Shrikhande received a Ph.D. in 1950 from the University of North Carolina at Chapel Hill under the supervision of Raj Chandra Bose. Shrikhande taught at various universities in the USA and in India. Shrikhande was a professor of mathematics at Banaras Hindu University, Banaras; the founding head of the department of mathematics, University of Mumbai; and the founding director of the Center of Advanced Study in Mathematics, Mumbai, until he retired in 1978. He was a fellow of the Indian National Science Academy, the Indian Academy of Sciences and the Institute of Mathematical Statistics, USA. In 1988, his wife Shakuntala died and he moved to the United States. Shrikhande returned to India in 2009. He turned 100 in October 2017 and died in April 2020 at the age of 102. His son Mohan Shrikhande is a professor of combinatorial mathematics at Central Michigan University in Mt. Pleasant, Michigan.
https://en.wikipedia.org/wiki/Policy%20and%20charging%20rules%20function
Policy and Charging Rules Function (PCRF) is the software node that determines policy rules in real time in a multimedia network. As a policy tool, the PCRF plays a central role in next-generation networks. Unlike earlier policy engines that were added onto an existing network to enforce policy, the PCRF is a software component that operates at the network core and accesses subscriber databases and other specialized functions, such as a charging system, in a centralized manner. Because it operates in real time, the PCRF has an increased strategic significance and a broader potential role than traditional policy engines. This has led to a proliferation of PCRF products since 2008. The PCRF is the part of the network architecture that aggregates information to and from the network, operational support systems, and other sources (such as portals) in real time, supporting the creation of rules and then automatically making policy decisions for each subscriber active on the network. Such a network might offer multiple services, quality of service (QoS) levels, and charging rules. The PCRF can provide a network-agnostic solution (wireline and wireless) and can also enable a multi-dimensional approach, which helps create a lucrative and innovative platform for operators. The PCRF can be integrated with different platforms like billing, rating, charging, and the subscriber database, or it can be deployed as a standalone entity. The PCRF plays a key role in VoLTE as a mediator of network resources for the IP Multimedia Subsystem (IMS) network, establishing calls and allocating the requested bandwidth to the call bearer with configured attributes. This enables an operator to offer differentiated voice services to users and charge a premium for them. Operators also have an opportunity to use the PCRF to prioritize calls to emergency numbers in next-generation networks.
https://en.wikipedia.org/wiki/Friedrich%20Loeffler%20Institute
The Friedrich Loeffler Institute (FLI) is the Federal Institute for Animal Health of Germany, that country's leading animal disease center. The institute was founded in 1910 and named for its founder Friedrich Loeffler in 1952. The FLI is situated on the Isle of Riems, which belongs to the City of Greifswald. Riems is a very small island that can be reached via a causeway, which can be closed off in case of an outbreak. These circumstances made Riems an ideal location for one of the most modern animal health research facilities in the world. The Friedrich Loeffler Institute reports directly to the German Ministry of Food, Agriculture and Consumer Protection. Its main subject is the thorough study of livestock health and closely related subjects including molecular biology, virus diagnostics, immunology, and epidemiology. Federal law makes the FLI responsible for national and international animal disease control; it also serves as the international reference laboratory for several viral diseases. The institute publishes its research and cooperates with other national and international institutions and researchers. Among the animal diseases under research are, for instance, foot-and-mouth disease, mad cow disease, and avian influenza. Currently, 330 people work for the FLI, and an additional 140 will be employed upon completion of the construction work. The Federal Government is spending 260 million euros to build new laboratories and barns. As part of this extension, in 2010 the Riems institute completed biosafety level 4 laboratory facilities, which enable research on the most dangerous viruses—one of four such facilities in Germany. Organisation The institution is managed by President Prof. Dr. Dr. h. c. Thomas C. Mettenleiter, who is also the Head of the Institute of molecular virology and cell biology (IMVZ) and teaches at the nearby University of Greifswald, and Vice-President Prof. Dr. Franz J. Conraths, head of the Inst
https://en.wikipedia.org/wiki/Spherical%20law%20of%20cosines
In spherical trigonometry, the law of cosines (also called the cosine rule for sides) is a theorem relating the sides and angles of spherical triangles, analogous to the ordinary law of cosines from plane trigonometry. Given a unit sphere, a "spherical triangle" on the surface of the sphere is defined by the great circles connecting three points , and on the sphere (shown at right). If the lengths of these three sides are (from to ), (from to ), and (from to ), and the angle of the corner opposite is , then the (first) spherical law of cosines states: Since this is a unit sphere, the lengths , and are simply equal to the angles (in radians) subtended by those sides from the center of the sphere. (For a non-unit sphere, the lengths are the subtended angles times the radius, and the formula still holds if and are reinterpreted as the subtended angles). As a special case, for , then , and one obtains the spherical analogue of the Pythagorean theorem: If the law of cosines is used to solve for , the necessity of inverting the cosine magnifies rounding errors when is small. In this case, the alternative formulation of the law of haversines is preferable. A variation on the law of cosines, the second spherical law of cosines (also called the cosine rule for angles), states: where and are the angles of the corners opposite to sides and , respectively. It can be obtained from consideration of a spherical triangle dual to the given one. Proofs First proof Let , and denote the unit vectors from the center of the sphere to those corners of the triangle. The angles and distances do not change if the coordinate system is rotated, so we can rotate the coordinate system so that is at the north pole and is somewhere on the prime meridian (longitude of 0). With this rotation, the spherical coordinates for are where is the angle measured from the north pole not from the equator, and the spherical coordinates for are The Cartesian coordinates for ar
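The formulas in this passage appear to have been stripped during text extraction. Reconstructed from the standard statements, with sides a, b, c of the spherical triangle lying opposite corner angles A, B, C:

```latex
% Reconstruction of the stripped formulas. Sides a, b, c are arc lengths
% on the unit sphere, opposite corner angles A, B, C.
% First spherical law of cosines (cosine rule for sides):
\[
  \cos c = \cos a \cos b + \sin a \sin b \cos C .
\]
% Special case C = \pi/2: the spherical analogue of the Pythagorean theorem,
\[
  \cos c = \cos a \cos b .
\]
% Second spherical law of cosines (cosine rule for angles), obtained from
% the dual (polar) triangle:
\[
  \cos C = -\cos A \cos B + \sin A \sin B \cos c .
\]
```

The rounding-error remark in the text concerns solving the first formula for c when c is small, since arccos is ill-conditioned near 1; the haversine form avoids this.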
https://en.wikipedia.org/wiki/Seabird%20Colony%20Register
The Seabird Colony Register (SCR) is a database managed by the British Joint Nature Conservation Committee. It contains counts of breeding seabirds at British seabird colonies made between 1969 and 1998, and is used for analysing past changes in breeding seabird numbers and in colony size in Britain and Ireland. Data included in the SCR include the results of two complete seabird censuses of Britain and Ireland, Operation Seafarer (1969/70) and the Seabird Colony Register Census (1985–1987), as well as ad hoc counts and counts from other surveys. Data are held for all 25 species of seabird breeding throughout Britain and Ireland. The SCR has been partially superseded by the Seabird 2000 database.
https://en.wikipedia.org/wiki/Garra%20yiliangensis
Garra yiliangensis is a species of ray-finned fish in the genus Garra from Yunnan. The species is known only from the type specimen which was collected in the 1960s from a hill stream in Yunnan and was formally described in 1977.
https://en.wikipedia.org/wiki/Nine%20men%27s%20morris
Nine men's morris is a strategy board game for two players dating at least to the Roman Empire. The game is also known as nine-man morris, mill, mills, the mill game, merels, merrills, merelles, marelles, morelles, and ninepenny marl in English. In North America, the game has also been called cowboy checkers, and its board is sometimes printed on the back of checkerboards. Nine men's morris is a solved game, that is, a game whose optimal strategy has been calculated. It has been shown that with perfect play from both players, the game results in a draw. The Latin word means 'gamepiece', which may have been corrupted in English to 'morris', while miles is Latin for 'soldier'. Three main alternative variations of the game are three, six, and twelve men's morris. Rules The board consists of a grid with twenty-four intersections, or points. Each player has nine pieces, or men, usually coloured black and white. Players try to form 'mills'—three of their own men lined up horizontally or vertically—allowing a player to remove an opponent's man from the game. A player wins by reducing the opponent to two men (whereupon they can no longer form mills and thus are unable to win) or by leaving them without a legal move. The game proceeds in three phases: (1) placing men on vacant points; (2) moving men to adjacent points; and (3) moving men to any vacant point when the player has been reduced to three men (this 'flying' phase is an optional rule). Phase 1: Placing pieces The game begins with an empty board. The players determine who plays first and then take turns. During the first phase, a player's turn consists of placing a man from their hand onto an empty point. If a player is able to place three of their pieces on contiguous points in a straight line, vertically or horizontally, they have formed a mill, which allows them to remove one of their opponent's pieces from the board. A piece in an opponent's mill, however, can be removed only if no other pieces are available. After all men have been placed, phase two
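The mill-forming rule above can be sketched in code. The sketch below uses one common (but not canonical) 0–23 numbering of the board's points and enumerates the sixteen possible mills, eight horizontal and eight vertical, then checks which mills a just-placed piece completes.

```python
# Points 0-23 under an assumed numbering: outer square (0-2 top, 9/14 sides,
# 21-23 bottom), middle square (3-5, 10/13, 18-20), inner square (6-8,
# 11/12, 15-17). There are 16 mills in total.
MILLS = [
    # horizontal lines
    (0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11),
    (12, 13, 14), (15, 16, 17), (18, 19, 20), (21, 22, 23),
    # vertical lines
    (0, 9, 21), (3, 10, 18), (6, 11, 15), (1, 4, 7),
    (16, 19, 22), (8, 12, 17), (5, 13, 20), (2, 14, 23),
]

def new_mills(board, point):
    """Return the mills completed by the piece occupying `point`.

    `board` is a 24-element list: None for an empty point, else a player id.
    """
    player = board[point]
    return [m for m in MILLS
            if point in m and all(board[p] == player for p in m)]

board = [None] * 24
for p in (0, 1, 2):          # player 'W' fills the top edge of the outer square
    board[p] = "W"
```

With this position, `new_mills(board, 2)` reports the single mill `(0, 1, 2)`, which under the rules would entitle 'W' to remove an opposing man.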
https://en.wikipedia.org/wiki/Nascent%20state%20%28chemistry%29
Nascent state or in statu nascendi (Latin: 'in the state of being born', i.e. just emerging) is an obsolete theory in chemistry. It refers to the form of a chemical element (or sometimes compound) at the instant of its liberation or formation. Often encountered are atomic oxygen (Onasc), nascent hydrogen (Hnasc), and similar forms of chlorine (Clnasc) or bromine (Brnasc). The concept of a "nascent state" was developed to explain the observation that gases generated in situ are frequently more reactive than identical chemicals that have been stored for an extended period of time. The first usage of the term was in work by Joseph Priestley around 1790. Auguste Laurent expanded on the theory in the mid 19th century. Constantine Zenghelis hypothesized in 1920 that the increased reactivity of the "nascent" state was due to the fine dispersion of the molecules, not their status as free atoms. Still popular in the early 20th century, the nascent state theory was recognized as declining by 1942. A 1990 review noted that the term was still found as a passing mention in contemporary textbooks. The review concluded that the increased activity observed is actually caused by multiple kinetic effects, and that grouping all these effects under a single term could lead chemists to view the effect too simplistically. See also Monatomic gas
https://en.wikipedia.org/wiki/CD33
CD33 or Siglec-3 (sialic acid binding Ig-like lectin 3, SIGLEC3, SIGLEC-3, gp67, p67) is a transmembrane receptor expressed on cells of myeloid lineage. It is usually considered myeloid-specific, but it can also be found on some lymphoid cells. It binds sialic acids and is therefore a member of the SIGLEC family of lectins. Structure The extracellular portion of this receptor contains two immunoglobulin domains (one IgV and one IgC2 domain), placing CD33 within the immunoglobulin superfamily. The intracellular portion of CD33 contains immunoreceptor tyrosine-based inhibitory motifs (ITIMs) that are implicated in inhibition of cellular activity. Function CD33 can be stimulated by any molecule with sialic acid residues, such as glycoproteins or glycolipids. Upon binding, the immunoreceptor tyrosine-based inhibition motif (ITIM) of CD33, present on the cytosolic portion of the protein, is phosphorylated and acts as a docking site for Src homology 2 (SH2) domain-containing proteins like SHP phosphatases. This results in a cascade that inhibits phagocytosis in the cell. Alzheimer's disease CD33 controls microglial activation, but in Alzheimer's disease it goes into overdrive in the presence of amyloid and tau proteins; its expression is known to be tied to TREM2. Clinical significance CD33 is the target of gemtuzumab ozogamicin (trade name: Mylotarg®; Pfizer/Wyeth-Ayerst Laboratories), an antibody-drug conjugate (ADC) for the treatment of patients with acute myeloid leukemia. The drug is a recombinant, humanized anti-CD33 monoclonal antibody (IgG4 κ antibody hP67.6) covalently attached to the cytotoxic antitumor antibiotic calicheamicin (N-acetyl-γ-calicheamicin) via a bifunctional linker (4-(4-acetylphenoxy)butanoic acid). Several mechanisms of resistance to gemtuzumab ozogamicin have been elucidated. On September 1, 2017, the FDA approved Pfizer's Mylotarg. Gemtuzumab ozogamicin was initially approved by the U.S. Food and Drug Administration in 2000. However, during post
https://en.wikipedia.org/wiki/Cross-platform%20play
In video games with online gaming functionality, cross-platform play (also called cross-compatible play, crossplay, or cross-play) describes the ability of players using different video game hardware to play with each other simultaneously. It is commonly applied to the ability of players using a game on a specific video game console to play alongside a player on a different hardware platform, such as another console or a computer. A related concept is cross-save, where the player's progress in a game is stored on remote servers and can be continued in the game on a different hardware platform. Cross-play is related to but distinct from the notion of cross-platform development, which uses software languages and tools to enable deployment of software on multiple platforms. Cross-platform play is also distinct from the ability to play a game on different hardware platforms, often requiring purchase of the title for only a single system to have access to it on other systems, while retaining progress in the game through the use of cloud storage or similar techniques. Cross-platform play, while technically feasible with today's computer hardware, is generally impeded by two factors. One factor is the difference in control schemes between personal computers and consoles, with keyboard-and-mouse controls typically giving computer players an advantage that cannot be easily remedied. The second factor relates to the closed online services used on consoles, which are designed to provide a safe and consistent environment for their players and require the businesses' cooperation to open up for cross-platform play. Up through September 2018, Sony Interactive Entertainment had restricted PlayStation 4 cross-platform play with other consoles, creating a rift between players of popular games like Rocket League and Fortnite Battle Royale. In September 2018, Sony changed their stance and opened up beta-testing for Fortnite cross-platform pl
https://en.wikipedia.org/wiki/Shq1
Shq1p is a protein involved in the rRNA processing pathway. It was discovered by Pok Yang in the Chanfreau laboratory at UCLA. Depletion of Shq1p leads to decreased levels of various H/ACA box snoRNAs (H/ACA box snoRNAs are responsible for pseudouridylation of pre-rRNA) and of certain pre-rRNA intermediates. Background During the synthesis of eukaryotic ribosomes, four mature ribosomal RNAs (the 5S, 5.8S, 18S, and 25S) must be synthesized. Three of these rRNAs (5.8S, 18S, and 25S) come from a single pre-rRNA known as the 35S. Although many of the intermediates in this rRNA processing pathway have been identified over the last thirty years, there are still a number of proteins involved in this process whose specific function is unknown. Function Shq1, a protein thought to play a role in the stabilization and/or production of box H/ACA snoRNA, is still uncharacterized. It has been proposed that Shq1, along with Naf1p, is involved in the initial steps of the biogenesis of H/ACA box snoRNPs (box H/ACA snoRNAs form complexes with proteins, thereby forming snoRNPs) because of its association with certain snoRNP proteins during the snoRNP's maturation, while showing very little association with the mature snoRNP complex. Despite the known involvement of Shq1 in H/ACA box snoRNP production, the exact function of this protein in the overall rRNA processing pathway is still unknown. See also rRNA snoRNA Ribosomes Eukaryotic translation Proteins
https://en.wikipedia.org/wiki/FCGR2A
Low affinity immunoglobulin gamma Fc region receptor II-a is a protein that in humans is encoded by the FCGR2A gene. Interactions FCGR2A has been shown to interact with PIK3R1 and Syk. See also CD32
https://en.wikipedia.org/wiki/Intel%20Core%20%28microarchitecture%29
The Intel Core microarchitecture (provisionally referred to as the Next Generation Micro-architecture, and developed as Merom) is a multi-core processor microarchitecture launched by Intel in mid-2006. It is a major evolution of Yonah, the previous iteration of the P6 microarchitecture series, which started in 1995 with the Pentium Pro. It also replaced the NetBurst microarchitecture, which suffered from high power consumption and heat intensity due to an inefficient pipeline designed for high clock rates. In early 2004, the new version of NetBurst (Prescott) needed very high power to reach the clock rates required for competitive performance, making it unsuitable for the shift to dual/multi-core CPUs. On May 7, 2004, Intel confirmed the cancellation of the next NetBurst processors, Tejas and Jayhawk. Intel had been developing Merom, the 64-bit evolution of the Pentium M, since 2001, and decided to expand it to all market segments, replacing NetBurst in desktop computers and servers. It inherited from the Pentium M the choice of a short and efficient pipeline, delivering superior performance despite not reaching the high clock rates of NetBurst. The first processors that used this architecture were code-named 'Merom', 'Conroe', and 'Woodcrest'; Merom is for mobile computing, Conroe for desktop systems, and Woodcrest for servers and workstations. While architecturally identical, the three processor lines differ in the socket used, bus speed, and power consumption. The first Core-based desktop and mobile processors were branded Core 2, later expanding to the lower-end Pentium Dual-Core, Pentium and Celeron brands, while server and workstation Core-based processors were branded Xeon. Features The Core microarchitecture returned to lower clock rates and improved the use of both available clock cycles and power when compared with the preceding NetBurst microarchitecture of the Pentium 4 and D-branded CPUs. The Core microarchitecture provides more efficient decoding stages, execution units,
https://en.wikipedia.org/wiki/Plasma%20cleaning
Plasma cleaning is the removal of impurities and contaminants from surfaces through the use of an energetic plasma or dielectric barrier discharge (DBD) plasma created from gaseous species. Gases such as argon and oxygen, as well as mixtures such as air and hydrogen/nitrogen, are used. The plasma is created by using high-frequency voltages (typically kHz to >MHz) to ionise the low-pressure gas (typically around 1/1000 of atmospheric pressure), although atmospheric-pressure plasmas are now also common. Methods In a plasma, gas atoms are excited to higher energy states and also ionized. As the atoms and molecules 'relax' to their normal, lower energy states, they release a photon of light; this results in the characteristic "glow" or light associated with plasma. Different gases give different colors. For example, oxygen plasma emits a light blue color. A plasma's activated species include atoms, molecules, ions, electrons, free radicals, metastables, and photons in the short-wave ultraviolet (vacuum UV, or VUV for short) range. This mixture then interacts with any surface placed in the plasma. If the gas used is oxygen, the plasma is an effective, economical, environmentally safe method for critical cleaning. The VUV energy is very effective in breaking most organic bonds (i.e., C–H, C–C, C=C, C–O, and C–N) of surface contaminants. This helps to break apart high-molecular-weight contaminants. A second cleaning action is carried out by the oxygen species created in the plasma (O2+, O2−, O3, O, O+, O−, ionised ozone, metastable excited oxygen, and free electrons). These species react with organic contaminants to form H2O, CO, CO2, and lower-molecular-weight hydrocarbons. These compounds have relatively high vapor pressures and are evacuated from the chamber during processing. The resulting surface is ultra-clean. In Fig. 2, the relative carbon content over material depth is shown before and after cleaning with excited oxygen [1]. If the part consists of easily oxi
https://en.wikipedia.org/wiki/History%20of%20Sinhala%20software
Sinhala-language software for computers has existed since the late 1980s (Samanala, written in C), but no standard character representation system was in place, which resulted in proprietary character representation systems and fonts. In the wake of this, CINTEC (Computer and Information Technology Council of Sri Lanka) introduced Sinhala within the Unicode (16‑bit character technology) standard. ICTA concluded the work started by CINTEC for approving and standardizing Sinhala Unicode in Sri Lanka. Timeline 1980–1989 1985 CINTEC establishes a committee for the use of Sinhala & Tamil in Computer Technology. 1987 "DOS WordPerfect" Reverend Gangodawila Soma Thero, who was the chief incumbent at the Springvale Buddhist temple in Melbourne, Australia, asked the lay members of the temple to produce a monthly newsletter for the temple in Sinhala, called "Bodu Puwath". A lay person named Jayantha de Silva developed two HP PCL Sinhala fonts called Lihil and an intelligent phonetic keyboard that was able to select letters based on context, together with a printer driver and screen fonts. All this was possible because the utilities to create the keyboard and printer driver were supplied with WordPerfect. It was easy to use and was installed in many PCs owned by lay members and in the temple PC for typing articles. The program fell into disuse after Windows arrived in 1990, as it did not support the WordPerfect macro keyboard. 1988 "Super77" First trilingual word processor (DOS-based), initially developed at "Super Bits Computer Systems", Katunayake, and further improved to the commercial level at IFS Kandy (by Rohan Manamudali & Sampath Godamunne, under Prof. Cyril Ponnamperuma). It was later renamed the "THIBUS Trilingual Software System" (Windows-based). 1989 "WadanTharuwa" (meaning WordStar in Sinhala) developed by the University of Colombo. It was one of the first commercial Sinhala word processing software products. It gave inspiration to a new generation
https://en.wikipedia.org/wiki/Computer%20chess
Computer chess includes both hardware (dedicated computers) and software capable of playing chess. Computer chess provides opportunities for players to practice even in the absence of human opponents, and also provides opportunities for analysis, entertainment and training. Computer chess applications that play at the level of a chess grandmaster or higher are available on hardware from supercomputers to smart phones. Standalone chess-playing machines are also available. Stockfish, GNU Chess, Fruit, and other free open source applications are available for various platforms. Computer chess applications, whether implemented in hardware or software, utilize different strategies than humans to choose their moves: they use heuristic methods to build, search and evaluate trees representing sequences of moves from the current position and attempt to execute the best such sequence during play. Such trees are typically quite large, thousands to millions of nodes. The computational speed of modern computers, capable of processing tens of thousands to hundreds of thousands of nodes or more per second, along with extension and reduction heuristics that narrow the tree to mostly relevant nodes, make such an approach effective. The first chess machines capable of playing chess or reduced chess-like games were software programs running on digital computers early in the vacuum-tube computer age (1950s). The early programs played so poorly that even a beginner could defeat them. Within 40 years, in 1997, chess engines running on super-computers or specialized hardware were capable of defeating even the best human players. By 2006, programs running on desktop PCs had attained the same capability. In 2006, Monty Newborn, Professor of Computer Science at McGill University, declared: "the science has been done". Nevertheless, solving chess is not currently possible for modern computers due to the game's extremely large number of possible variations. Computer chess was once consider
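The heuristic tree search described above can be sketched in miniature. This is not a chess engine: the game here is a deliberately tiny take-away game (players alternately remove 1 or 2 stones; taking the last stone wins), used only to exercise a depth-limited negamax search with alpha-beta pruning, the same build-search-evaluate pattern chess engines use on vastly larger trees.

```python
# Depth-limited negamax with alpha-beta pruning over an abstract game
# interface. TakeAwayGame is a toy stand-in, not a chess implementation.

def negamax(game, depth, alpha=float("-inf"), beta=float("inf")):
    """Return the best achievable score for the side to move."""
    if depth == 0 or game.is_terminal():
        return game.evaluate()              # heuristic leaf evaluation
    best = float("-inf")
    for move in game.legal_moves():
        game.push(move)
        score = -negamax(game, depth - 1, -beta, -alpha)
        game.pop()
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                   # cutoff: this line is refuted
            break
    return best

class TakeAwayGame:
    """Players alternately remove 1 or 2 stones; taking the last wins."""
    def __init__(self, stones):
        self.stones = stones
        self.history = []
    def is_terminal(self):
        return self.stones == 0
    def evaluate(self):
        # Side to move faces an empty pile: the opponent took the last
        # stone, so this position is lost (-1). Non-terminal cutoffs
        # would use a heuristic; here they score 0.
        return -1 if self.stones == 0 else 0
    def legal_moves(self):
        return [m for m in (1, 2) if m <= self.stones]
    def push(self, move):
        self.history.append(move)
        self.stones -= move
    def pop(self):
        self.stones += self.history.pop()

print(negamax(TakeAwayGame(4), 10))  # 1: side to move wins from 4 stones
print(negamax(TakeAwayGame(3), 10))  # -1: 3 stones is a lost position
```

A real engine replaces the toy game with a chess position, the exact terminal evaluation with a material-and-position heuristic, and adds the extension/reduction heuristics mentioned above.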
https://en.wikipedia.org/wiki/Face%20Animation%20Parameter
A Face Animation Parameter (FAP) is a component of the MPEG-4 Face and Body Animation (FBA) International Standard (ISO/IEC 14496-1 & -2) developed by the Moving Picture Experts Group. It describes a standard for virtually representing humans and humanoids in a way that adequately achieves visual speech intelligibility as well as the mood and gesture of the speaker, and allows for very low bitrate compression and transmission of animation parameters. FAPs control key feature points on a face model mesh that are used to produce animated visemes and facial expressions, as well as head and eye movement. These feature points are part of the Face Definition Parameters (FDPs) also defined in the MPEG-4 standard. FAPs represent 66 displacements and rotations of the feature points from the neutral face position, which is defined as: mouth closed, eyelids tangent to the iris, gaze and head orientation straight ahead, teeth touching, and tongue touching teeth. These FAPs were designed to be closely related to human facial muscle movements. In addition to animation, FAPs are used in automatic speech recognition, and biometrics.
https://en.wikipedia.org/wiki/Join-calculus
The join-calculus is a process calculus developed at INRIA. The join-calculus was developed to provide a formal basis for the design of distributed programming languages, and therefore intentionally avoids communications constructs found in other process calculi, such as rendezvous communications, which are difficult to implement in a distributed setting. Despite this limitation, the join-calculus is as expressive as the full π-calculus. Encodings of the π-calculus in the join-calculus, and vice versa, have been demonstrated. The join-calculus is a member of the π-calculus family of process calculi, and can be considered, at its core, an asynchronous π-calculus with several strong restrictions: Scope restriction, reception, and replicated reception are syntactically merged into a single construct, the definition; Communication occurs only on defined names; For every defined name there is exactly one replicated reception. However, as a language for programming, the join-calculus offers at least one convenience over the π-calculus — namely the use of multi-way join patterns, the ability to match against messages from multiple channels simultaneously. Implementations Languages based on the join-calculus The join-calculus programming language is a new language based on the join-calculus process calculus. It is implemented as an interpreter written in OCaml, and supports statically typed distributed programming, transparent remote communication, agent-based mobility, and some failure-detection. Though not explicitly based on join-calculus, the rule system of CLIPS implements it if every rule deletes its inputs when triggered (retracts the relevant facts when fired). Many implementations of the join-calculus were made as extensions of existing programming languages: JoCaml is a version of OCaml extended with join-calculus primitives Polyphonic C# and its successor Cω extend C# MC# and Parallel C# extend Polyphonic C# Join Java extends Java A Concurrent Basic
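The multi-way join pattern mentioned above can be illustrated with a small simulation. This is an informal sketch, not a real join-calculus implementation: a two-channel "definition" whose reaction body fires only once a message is pending on both channels, echoing how a join pattern matches messages from multiple channels simultaneously.

```python
# Illustrative sketch of a two-way join pattern. The class names and the
# reaction-firing policy are assumptions for demonstration only.
from collections import deque

class JoinPattern:
    def __init__(self, body):
        self.queues = (deque(), deque())
        self.body = body            # reaction fired on a complete match
        self.results = []
    def send(self, channel, message):
        self.queues[channel].append(message)
        # Fire the reaction whenever every joined channel has a message
        # queued, consuming one message from each (atomic join step).
        while self.queues[0] and self.queues[1]:
            a = self.queues[0].popleft()
            b = self.queues[1].popleft()
            self.results.append(self.body(a, b))

join = JoinPattern(lambda a, b: a + b)
join.send(0, 1)      # nothing fires yet: channel 1 is empty
join.send(1, 10)     # both channels now hold a message -> body runs
join.send(1, 20)     # queued, waiting for another message on channel 0
print(join.results)  # [11]
```

In JoCaml or Polyphonic C#, the equivalent would be declared as a join definition over two channels rather than simulated with explicit queues.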
https://en.wikipedia.org/wiki/Reptation%20Monte%20Carlo
Reptation Monte Carlo is a quantum Monte Carlo method. It is similar to diffusion Monte Carlo, except that it works with paths rather than points. This has some advantages for calculating certain properties of the system under study that diffusion Monte Carlo has difficulty with. In both diffusion Monte Carlo and reptation Monte Carlo, the method first aims to solve the time-dependent Schrödinger equation in the imaginary time direction. Propagating the Schrödinger equation in real time yields the dynamics of the system under study; propagating it in imaginary time yields a state that tends towards the ground state of the system. When substituting in place of , the Schrödinger equation becomes identical with a diffusion equation. Diffusion equations can be solved by imagining a huge population of particles (sometimes called "walkers"), each diffusing in a way that solves the original equation. This is how diffusion Monte Carlo works. Reptation Monte Carlo works in a very similar way, but is focused on the paths that the walkers take, rather than the density of walkers. In particular, a path may be mutated using a Metropolis algorithm, which tries a change (normally at one end of the path) and then accepts or rejects the change based on a probability calculation. The update step in diffusion Monte Carlo would be moving the walkers slightly, and then duplicating and removing some of them. By contrast, the update step in reptation Monte Carlo mutates a path, and then accepts or rejects the mutation.
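The "mutate one end, then accept or reject" step can be sketched for a toy problem. The assumptions here are loud: a single 1-D particle in a harmonic potential, a crude primitive approximation for the short-time weight, and a grow-at-the-head / shrink-at-the-tail move. A real reptation QMC code would use the full imaginary-time Green's function, not this cartoon.

```python
# Toy reptation move: the path "slithers" forward by one bead, with a
# Metropolis accept/reject on the new head. V(x) = x^2 / 2 is assumed.
import math
import random

def log_weight(x, dt=0.1):
    """Log of the imaginary-time weight for one bead (toy potential)."""
    return -dt * 0.5 * x * x

def reptation_move(path, dt=0.1, step=0.5, rng=random):
    """Propose a new head bead, drop the tail bead, and accept or
    reject the mutation with a Metropolis test (path length is fixed)."""
    head = path[-1]
    proposal = head + rng.gauss(0.0, step)
    # Ratio of the new head's weight to the discarded tail's weight.
    log_ratio = log_weight(proposal, dt) - log_weight(path[0], dt)
    if math.log(rng.random()) < log_ratio:
        return path[1:] + [proposal]   # accepted: path moves forward
    return list(path)                  # rejected: keep the old path

rng = random.Random(42)
path = [0.0] * 20                      # 20 beads, all at the origin
for _ in range(100):
    path = reptation_move(path, rng=rng)
print(len(path))  # 20: reptation preserves the path length
```

The contrast with diffusion Monte Carlo is visible in the update: no walker is duplicated or removed; the single path object is mutated and the mutation is accepted or rejected.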
https://en.wikipedia.org/wiki/Otroeda%20nerina
Otroeda nerina is a species of moth in the tussock-moth subfamily Lymantriinae. It was first described by Dru Drury in 1782 from Sierra Leone, and is also found in Cameroon, DR Congo, Gabon, Ghana and Nigeria. Description Upperside. Antennae strongly pectinated and brown. Head brown, the front being white. Thorax brown, with two white streaks along it. Abdomen brown. Wings black, streaked with light brown from the shoulders along the tendons, and two light yellowish patches, almost crossing the wings from the anterior edges, with a row of white spots placed along the external edges. Posterior wings dark yellow, with a deep black border running along the external edges from the upper to the abdominal corners. Underside. Palpi black. Mouth white. Neck and breast yellow. Legs brown, and yellow at top, and white beneath. Abdomen white, streaked longitudinally with brown. Anus yellow. Wings coloured as on the upperside, but brighter. Margins of all the wings entire. Wingspan 88 mm.
https://en.wikipedia.org/wiki/Hash%20function%20security%20summary
This article summarizes publicly known attacks against cryptographic hash functions. Note that not all entries may be up to date. For a summary of other hash function parameters, see comparison of cryptographic hash functions. Table color key Common hash functions Collision resistance Chosen prefix collision attack Preimage resistance Length extension Vulnerable: MD5, SHA1, SHA256, SHA512 Not vulnerable: SHA384, SHA-3, BLAKE2 Less-common hash functions Collision resistance Preimage resistance Attacks on hashed passwords Hashes described here are designed for fast computation and have roughly similar speeds. Because most users typically choose short passwords formed in predictable ways, passwords can often be recovered from their hashed value if a fast hash is used. Searches on the order of 100 billion tests per second are possible with high-end graphics processors. Special hashes called key derivation functions have been created to slow brute force searches. These include pbkdf2, bcrypt, scrypt, argon2, and balloon. See also Comparison of cryptographic hash functions Cryptographic hash function Collision attack Preimage attack Length extension attack Cipher security summary
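The slow key-derivation functions listed above can be demonstrated with Python's standard library, which ships PBKDF2. The salt and iteration count below are illustrative values only, not a recommendation.

```python
# PBKDF2 demonstration: each password guess now costs ~10^5 HMAC-SHA-256
# evaluations instead of one, which slashes an attacker's guesses/second.
import hashlib
import os

salt = os.urandom(16)            # random per-password salt
key = hashlib.pbkdf2_hmac(
    "sha256",                    # underlying hash
    b"correct horse",            # the password being stretched
    salt,
    100_000,                     # iteration count: the work factor
)
print(len(key))  # 32-byte derived key
```

Memory-hard functions such as scrypt (also in `hashlib`), argon2, and balloon additionally resist GPU and ASIC attacks by requiring large amounts of RAM per guess.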
https://en.wikipedia.org/wiki/The%20Rocklopedia%20Fakebandica
The Rocklopedia Fakebandica, by T. Mike Childs, is an illustrated encyclopedia of fictional musical groups and musicians, as seen in movies and television. It was officially released November 6, 2004. The book catalogs such better-known fake bands as Spinal Tap, The Blues Brothers, The Rutles, and The Chipmunks, along with dozens of less well known ones. The book takes a light-hearted, humorous approach, often pointing out the discrepancies between the experiences of real bands and musicians and the unlikely adventures fictional ones have. The book grew out of a website started by the author in 2000. The website includes fictional bands from other sources, such as books and TV commercials, as well as many bands not found in the book. External links Official site 2004 non-fiction books Online encyclopedias Popular culture books Encyclopedias of music
https://en.wikipedia.org/wiki/Bariloche%20Atomic%20Centre
The Bariloche Atomic Centre () is one of the research and development centres of the Argentine National Atomic Energy Commission. As its name implies, it is located in the city of San Carlos de Bariloche. Bariloche Atomic Centre is responsible for research in physics and nuclear engineering. It also hosts the Balseiro Institute, a collaboration between the National University of Cuyo and the National Atomic Energy Commission. The Bariloche Atomic Centre opened in 1955 with its first director, José Antonio Balseiro. The RA-6 reactor started operations in 1982. Activity The centre is devoted to basic and applied physics research as well as nuclear and mechanical engineering. Basic research is focused on deepening understanding of nuclear energy. Applied sciences have provided support for both state- and privately owned companies. The main areas of research include: materials, neutrons, thermodynamics and theoretical physics. Nuclear engineering at the centre is aimed at further developing Argentina's atomic technology. Most of the research is done taking advantage of RA-6, a 1 MW experimental reactor. Experiments done with the RA-6 include irradiation and radioactive activation of several materials. Some research groups also focus on refining reactor calculations and performance measurements and designing mechanical devices for those tasks. Various companies have sprung out of the Bariloche Atomic Centre, such as INVAP and ALTEC. Buildings and structures in Río Negro Province Research institutes in Argentina Nuclear technology in Argentina Nuclear research institutes Nuclear power in Argentina Bariloche 1955 establishments in Argentina
https://en.wikipedia.org/wiki/Johnny%20Castaway
Johnny Castaway is a screensaver released in 1992 by Sierra On-Line/Dynamix, and marketed under the Screen Antics brand as "the world's first story-telling screen saver". The screensaver depicts a man, Johnny Castaway, stranded on a very small island with a single palm tree. It follows a story which is slowly revealed through time. While Johnny fishes, builds sand castles, and jogs on a regular basis, other events are seen less frequently, such as a mermaid or Lilliputian pirates coming to the island, or a seagull swooping down to steal his shorts while he is bathing. Much like the castaways of Gilligan's Island, Johnny repeatedly comes close to being rescued, but ultimately remains on the island as a result of various unfortunate accidents. "Johnny Castaway" includes Easter eggs for a number of United States holidays such as Halloween, Christmas and Independence Day. During these holidays, the scenes are played out as usual except for some detail representing that holiday or event. During the last week of the year, for example, the palm tree will sport a "Happy New Year" banner, and on Halloween a jack-o'-lantern can be seen in the sand. The screensaver can be manipulated into showing these features by adjusting the computer clock to correspond with the date of the event. The Johnny Castaway screensaver was distributed on a 3½-inch floppy disk and required a computer with a 386SX processor and Windows 3.1 as its operating system. Today, it is widely available on the internet, but as it relies on outdated 16-bit software components, it will only work on older versions of the Microsoft Windows operating system, although workarounds exist for getting the screensaver to run on Windows 64-bit, Mac OS X and Linux. Character design was done by Shawn Bird while he was at Dynamix. The program had been developed at Jeff Tunnell Productions, the eponymous company of the original founder of Dynamix. According to Ken Williams, the screensaver was one of several products by
https://en.wikipedia.org/wiki/Trumpet%20Winsock
Trumpet Winsock is a TCP/IP stack for Windows 3.x that implemented the Winsock API, an API for network sockets. It was developed by Peter Tattam of Trumpet Software International and distributed as shareware. History The first version, 1.0A, was released in 1994. It rapidly gained a reputation as the best tool for connecting to the internet, and guides for internet connectivity commonly advised using Trumpet Winsock. The author received very little financial compensation for developing the software. In 1996, a 32-bit version was released. Lawsuit In the Trumpet Software Pty Ltd. v OzEmail Pty Ltd. case, the defendant had distributed Trumpet Winsock for free with a magazine. It had also suppressed notices that the software was developed by Trumpet Software. Replacement by Microsoft Windows 95 includes an IPv4 stack, but it is not installed by default. An early version of this IPv4 stack, codenamed Wolverine, was released by Microsoft for Windows for Workgroups in 1994. Microsoft also released Internet Explorer 5 for Windows 3.x with an included dialer application for calling the modem pool of a dial-up Internet service provider. The Wolverine stack does not include a dialer, but another computer on the same LAN may make a dialed connection, or a dialer not included with Wolverine may be used on the computer running Wolverine. Architecture The binary for Trumpet Winsock is called TCPMAN.EXE. Other files included the main winsock.dll and three UCSC connection .cmd file scripts.
https://en.wikipedia.org/wiki/Abstract%20economy
In theoretical economics, an abstract economy (also called a generalized N-person game) is a model that generalizes both the standard model of an exchange economy in microeconomics, and the standard model of a game in game theory. An equilibrium in an abstract economy generalizes both a Walrasian equilibrium in microeconomics, and a Nash equilibrium in game-theory. The concept was introduced by Gérard Debreu in 1952. He named it generalized N-person game, and proved the existence of equilibrium in this game. Later, Debreu and Kenneth Arrow (who renamed the concept to abstract economy) used this existence result to prove the existence of a Walrasian equilibrium (aka competitive equilibrium) in the Arrow–Debreu model. Later, Shafer and Sonnenschein extended both theorems to irrational agents - agents with non-transitive and non-complete preferences. Abstract economy with utility functions The general case Definition In the model of Debreu, an abstract economy contains a finite number N of agents. For each agent , there is: A choice-set (a subset of some Euclidean space ). This represents the global set of choices that the agent can make. We define the cartesian product of all choice sets as: . An action-correspondence . This represents the set of possible actions the agent can take, given the choices of the other agents. A utility function: , representing the utility that the agent receives from each combination of choices. The goal of each agent is to choose an action that maximizes his utility. Equilibrium An equilibrium in an abstract economy is a vector of choices, , such that, for each agent , the action maximizes the function subject to the constraint :Equivalently, for each agent , there is no action such that: The following conditions are sufficient for the existence of equilibrium: Each choice-set is compact, non-empty and convex. Each action-correspondence is continuous, and its values are non-empty and convex. Each utility function
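The mathematical notation in the definition above was lost in extraction. For reference, Debreu's model can be sketched in standard notation; the symbol names here are a conventional reconstruction, not necessarily the article's originals.

```latex
% Abstract economy with N agents (reconstructed standard notation).
% Choice sets X_i, with X = X_1 \times \cdots \times X_N.
\begin{align*}
  A_i &\colon X \twoheadrightarrow X_i  && \text{action correspondence of agent } i\\
  u_i &\colon X \to \mathbb{R}          && \text{utility function of agent } i
\end{align*}
% Equilibrium: a vector x^* \in X such that, for every agent i,
\[
  x_i^* \in \operatorname*{arg\,max}_{x_i \in A_i(x^*)}
             u_i(x_1^*, \dots, x_{i-1}^*, x_i, x_{i+1}^*, \dots, x_N^*).
\]
```

When each $A_i$ is the constant correspondence $A_i(x) = X_i$, this reduces to an ordinary N-person game and the equilibrium to a Nash equilibrium.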
https://en.wikipedia.org/wiki/Marklund%20convection
Marklund convection, named after Swedish physicist Göran Marklund, is a convection process that takes place in filamentary currents of plasma. It occurs within a plasma with an associated electric field that causes convection of ions and electrons inward towards a central twisting filamentary axis. A temperature gradient within the plasma will also cause chemical separation based on different ionization potentials. Mechanism In Marklund's paper, the plasma convects radially inwards towards the center of a cylindrical flux tube. During this convection, the different chemical constituents of the plasma, each having its specific ionization potential, enter a progressively cooler region. The plasma constituents recombine and become neutral, and are thus no longer under the influence of the electromagnetic forcing. The ionization potentials thus determine where the different chemicals will be deposited. This provides an efficient means to accumulate matter within a plasma. In a partially ionized plasma, electromagnetic forces act on the non-ionized material indirectly through the viscosity between the ionized and non-ionized material. Hannes Alfvén showed that elements with the lowest ionization potential are brought closest to the axis, and form concentric hollow cylinders whose radii increase with ionization potential. The drift of ionized matter from the surroundings into the rope means that the rope acts as an ion pump, which evacuates surrounding regions, producing areas of extremely low density. See also QCD string, sometimes called a flux tube Flux transfer event Birkeland current Magnetohydrodynamics (MHD)
https://en.wikipedia.org/wiki/CDMF
In cryptography, CDMF (Commercial Data Masking Facility) is an algorithm developed at IBM in 1992 to reduce the security strength of the 56-bit DES cipher to that of 40-bit encryption, at the time a requirement of U.S. restrictions on export of cryptography. Rather than a separate cipher from DES, CDMF constitutes a key generation algorithm, called key shortening. It is one of the cryptographic algorithms supported by S-HTTP. Algorithm Like DES, CDMF accepts a 64-bit input key, but not all bits are used. The algorithm consists of the following steps: Clear bits 8, 16, 24, 32, 40, 48, 56, 64 (ignoring these bits as DES does). XOR the result with its encryption under DES using the key 0xC408B0540BA1E0AE. Clear bits 1, 2, 3, 4, 8, 16, 17, 18, 19, 20, 24, 32, 33, 34, 35, 36, 40, 48, 49, 50, 51, 52, 56, 64. Encrypt the result under DES using the key 0xEF2C041CE6382FE6. The resulting 64-bit data is to be used as a DES key. Due to step 3, a brute force attack needs to test only 2^40 possible keys.
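The bit-clearing bookkeeping in steps 1 and 3 can be shown directly; the two DES encryption steps are omitted here because DES is not in Python's standard library. The bit-numbering assumption (bit 1 = most significant bit of the 64-bit key, as in the DES convention) is mine, but the counting works out the same either way: after step 3, only 40 key bits remain free, which is why the brute-force search space is 2^40.

```python
# CDMF key-shortening masks (DES steps omitted). Bit 1 is taken to be
# the MSB of the 64-bit key, per the usual DES bit-numbering convention.

def clear_bits(key, bits):
    """Zero the listed bit positions (1 = MSB) of a 64-bit integer."""
    for b in bits:
        key &= ~(1 << (64 - b))
    return key & 0xFFFFFFFFFFFFFFFF

PARITY_BITS = [8, 16, 24, 32, 40, 48, 56, 64]           # step 1
SHORTENING_BITS = [1, 2, 3, 4, 8, 16, 17, 18, 19, 20,   # step 3
                   24, 32, 33, 34, 35, 36, 40, 48, 49,
                   50, 51, 52, 56, 64]

key = clear_bits(0xFFFFFFFFFFFFFFFF, PARITY_BITS)       # step 1
# ... step 2 (XOR with a DES encryption) would go here ...
key = clear_bits(key, SHORTENING_BITS)                  # step 3
# ... step 4 (DES encryption) would go here ...

print(bin(key).count("1"))  # 40 free bits survive -> 2**40 keys
```

Step 3 clears 24 of the 64 positions (its list subsumes the 8 parity bits from step 1), leaving 64 − 24 = 40 effective key bits.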
https://en.wikipedia.org/wiki/Anthropological%20Index%20Online
The Anthropological Index Online is an academic journal indexing service for anthropology. Overview The service indexes the journals received by The Anthropology Library at The British Museum (formerly at the Museum of Mankind), which receives periodicals in all branches of anthropology from academic institutions and publishers around the world. It is a collaboration between the Royal Anthropological Institute of Great Britain and Ireland and the Anthropology Department at the University of Kent. It is also available under licence from EBSCO Information Services as part of Anthropology Plus. There are several hundred thousand records to date, the earliest from the late 1950s. Subject coverage is cultural anthropology/social anthropology, physical anthropology, archaeology and linguistics. The index is regularly updated. See also List of academic databases and search engines External links Online databases British Museum Anthropology literature
https://en.wikipedia.org/wiki/Java%204K%20Game%20Programming%20Contest
The Java 4K Game Programming Contest, also known as Java 4K and J4K, is an informal contest that was started by the Java Game Programming community to challenge their software development abilities. Concept The goal of the contest is to develop the best game possible within four kibibytes (4096 bytes) of data. While the rules originally allowed for nearly any distribution method, recent years have required that the games be packaged as either an executable JAR file, a Java Webstart application, or a Java Applet, and now only an applet. Because the Java class file format incurs quite a bit of overhead, creating a complete game in 4K can be quite a challenge. As a result, contestants must choose how much of their byte budget they wish to spend on graphics, sound, and gameplay. Finding the best mix of these factors can be extremely difficult. Many new entrants believe that impressive graphics alone are enough to carry a game. However, entries with more modest graphics and focus on gameplay have regularly scored higher than such technology demonstrations. Prizes When first conceived, the "prize" for winning the contest was a bundle of "Duke Dollars", a virtual currency used on Sun Microsystems' Java forums. This currency could theoretically be redeemed for physical prizes such as watches and pens. The artificial currency was being downplayed by the introduction of the 4K contest, thus leaving no real prize at all. While there has been some discussion of providing prizes for the contest, it has continued to thrive without them. Spin-offs Following the creation of the Java4K contest, spin-offs targeting 8K, 16K, or a specific API like LWJGL have been launched, usually without success. While there has been a great deal of debate on why the Java 4K contest is so successful, the consensus from the contestants seems to be that it provides a very appealing challenge: not only do the entrants get the chance to show off how much they know about Java programming, but th
https://en.wikipedia.org/wiki/Local%20ternary%20patterns
Local ternary patterns (LTP) are an extension of local binary patterns (LBP). Unlike LBP, it does not threshold the pixels into 0 and 1; rather, it uses a threshold constant to threshold pixels into three values. Considering k as the threshold constant, c as the value of the center pixel, and p a neighboring pixel, the result of thresholding is: 1 if p > c + k; 0 if |p − c| ≤ k; −1 if p < c − k. In this way, each thresholded pixel has one of three values. Neighboring pixels are combined after thresholding into a ternary pattern. Computing a histogram of these ternary values would result in a large range, so the ternary pattern is split into two binary patterns, one keeping the +1 entries and one keeping the −1 entries. Histograms of the two are concatenated to generate a descriptor double the size of LBP. See also Local binary patterns
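The thresholding and the split into two binary patterns can be sketched for a single 3×3 neighbourhood. The pixel values below are illustrative only.

```python
# LTP for one neighbourhood: ternary thresholding around the center
# pixel, then the split into an "upper" and a "lower" binary pattern.

def ltp_codes(center, neighbors, k):
    """Return the (upper, lower) binary patterns of an LTP."""
    ternary = []
    for p in neighbors:
        if p > center + k:
            ternary.append(1)
        elif p < center - k:
            ternary.append(-1)
        else:                       # within the +/-k tolerance band
            ternary.append(0)
    upper = [1 if t == 1 else 0 for t in ternary]    # +1 entries -> 1
    lower = [1 if t == -1 else 0 for t in ternary]   # -1 entries -> 1
    return upper, lower

# Center pixel 54, threshold k = 5: the "no change" band is [49, 59].
up, lo = ltp_codes(54, [57, 54, 39, 50, 60, 55, 42, 49], k=5)
print(up)  # [0, 0, 0, 0, 1, 0, 0, 0]  (only 60 exceeds 59)
print(lo)  # [0, 0, 1, 0, 0, 0, 1, 0]  (39 and 42 fall below 49)
```

The tolerance band is what makes LTP less sensitive to noise than LBP: small fluctuations around the center value map to 0 instead of flipping a bit.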
https://en.wikipedia.org/wiki/Message%20of%20the%20day
Many computer systems display a message of the day or welcome message when a user first connects to them, logs in to them, or starts them. It is a way of sending a common message to all users, and may include information about system changes, system availability, and so on. More recently, systems have displayed personalized messages of the day. On many time-sharing systems, the contents of the message of the day are fetched from a system file: Compatible Time-Sharing System; Multics (the motd info segment); TOPS-10; Incompatible Timesharing System (ITS); Unix-like systems (the /etc/motd file, though most modern Linux distributions do not support the file); Univac VS/9; CP/CMS. Usage The contents of the special file are displayed after the user logs in successfully, typically before the login shell is started. Newer Unix-like systems may generate the message dynamically when the host boots or a user logs in. Various server-based PC games display messages of the day, including Half-Life, Call of Duty, Minecraft, and Battlefield. They may be personalized, encouraging users to try new features or make in-game purchases. Some IRC servers also display a message of the day on login. See also System console Fortune (Unix)
https://en.wikipedia.org/wiki/Embalming%20chemicals
Embalming chemicals are a variety of preservatives, sanitising and disinfectant agents, and additives used in modern embalming to temporarily prevent decomposition and restore a natural appearance for viewing a body after death. A mixture of these chemicals is known as embalming fluid and is used to preserve bodies of deceased persons for both funeral purposes and in medical research in anatomical laboratories. The period for which a body is embalmed is dependent on time, expertise of the embalmer and factors regarding duration of stay and purpose. Typically, embalming fluid contains a mixture of formaldehyde, glutaraldehyde, methanol, and other solvents. The formaldehyde content generally ranges from 5 to 37 percent and the methanol content may range from 9 to 56 percent. In the United States alone, about 20 million liters (roughly 5.3 million gallons) of embalming fluid are used every year. How they work Embalming fluid acts to fix (denature) cellular proteins, meaning that they cannot act as a nutrient source for bacteria; embalming fluid also kills the bacteria themselves. Formaldehyde or glutaraldehyde fixes tissue or cells by irreversibly connecting a primary amine group in a protein molecule with a nearby nitrogen in a protein or DNA molecule through a -CH2- linkage called a Schiff base. The end result also creates the simulation, via color changes, of the appearance of blood flowing under the skin. Modern embalming is not done with a single fixative. Instead, various chemicals are used to create a mixture, called an arterial solution, which is uniquely generated for the needs of each case. For example, a body needing to be repatriated overseas needs a higher index (percentage of diluted preservative chemical) than one simply for viewing (known in the United States and Canada as a funeral visitation) at a funeral home before cremation or burial. Process Embalming fluid is injected into the arterial system of the deceased's abdomen and a trocar is inse
https://en.wikipedia.org/wiki/Pulse%20wave
A pulse wave or pulse train is a type of non-sinusoidal waveform that includes square waves (duty cycle of 50%) and similarly periodic but asymmetrical waves (duty cycles other than 50%). It is a term used in synthesizer programming, and is a typical waveform available on many synthesizers. The exact shape of the wave is determined by the duty cycle or pulse width of the oscillator output. In many synthesizers, the duty cycle can be modulated (pulse-width modulation) for a more dynamic timbre. The pulse wave is also known as the rectangular wave, the periodic version of the rectangular function. The average level of a rectangular wave is also given by the duty cycle; therefore, by varying the on and off periods and then averaging, it is possible to represent any value between the two limiting levels. This is the basis of pulse-width modulation. Frequency-domain representation The Fourier series expansion for a rectangular pulse wave with period T, amplitude A and pulse length τ is x(t) = Aτ/T + (2A/π) Σ_{n=1}^∞ (1/n) sin(πnτ/T) cos(2πnt/T). Equivalently, if the duty cycle d = τ/T is used and ω = 2π/T: x(t) = Ad + (2A/π) Σ_{n=1}^∞ (1/n) sin(πnd) cos(nωt). Note that, for symmetry, the starting time (t = 0) in this expansion is halfway through the first pulse. Alternatively, x(t) can be written using the sinc function, using the definition sinc(x) = sin(πx)/(πx), as x(t) = Ad(1 + 2 Σ_{n=1}^∞ sinc(nd) cos(nωt)) or, with τ, as x(t) = (Aτ/T)(1 + 2 Σ_{n=1}^∞ sinc(nτ/T) cos(nωt)). Generation A pulse wave can be created by subtracting a sawtooth wave from a phase-shifted version of itself. If the sawtooth waves are bandlimited, the resulting pulse wave is bandlimited, too. A single ramp wave (sawtooth or triangle) applied to an input of a comparator produces a pulse wave that is not bandlimited. A voltage applied to the other input of the comparator determines the pulse width. Applications The harmonic spectrum of a pulse wave is determined by the duty cycle. Acoustically, the rectangular wave has been described variously as having a narrow/thin, nasal/buzzy/biting, clear, resonant, rich, round and bright sound. Pulse waves are used in many Steve Winwood songs, such as "While You See a Chance". In d
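The claim that the average level of a pulse wave equals its duty cycle is easy to check numerically. The sketch below samples a unit-amplitude, period-1 pulse wave (levels 0 and 1; with other limiting levels the average interpolates between them accordingly).

```python
# Numerical check: average level of a pulse wave == its duty cycle.

def pulse(t, duty):
    """Unit-amplitude pulse wave with period 1 and the given duty cycle."""
    return 1.0 if (t % 1.0) < duty else 0.0

N = 100_000                        # samples across one full period
for duty in (0.25, 0.5, 0.75):
    avg = sum(pulse(i / N, duty) for i in range(N)) / N
    assert abs(avg - duty) < 1e-3  # mean level tracks the duty cycle
print("average level equals duty cycle for all tested widths")
```

This is exactly the mechanism PWM exploits: rapidly switching between two fixed levels and low-pass filtering (averaging) yields any intermediate level.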
https://en.wikipedia.org/wiki/Q-Vandermonde%20identity
In mathematics, in the field of combinatorics, the q-Vandermonde identity is a q-analogue of the Chu–Vandermonde identity. Using standard notation for q-binomial coefficients, the identity states that The nonzero contributions to this sum come from values of j such that the q-binomial coefficients on the right side are nonzero, that is, Other conventions As is typical for q-analogues, the q-Vandermonde identity can be rewritten in a number of ways. In the conventions common in applications to quantum groups, a different q-binomial coefficient is used. This q-binomial coefficient, which we denote here by , is defined by In particular, it is the unique shift of the "usual" q-binomial coefficient by a power of q such that the result is symmetric in q and . Using this q-binomial coefficient, the q-Vandermonde identity can be written in the form Proof As with the (non-q) Chu–Vandermonde identity, there are several possible proofs of the q-Vandermonde identity. The following proof uses the q-binomial theorem. One standard proof of the Chu–Vandermonde identity is to expand the product in two different ways. Following Stanley, we can tweak this proof to prove the q-Vandermonde identity, as well. First, observe that the product can be expanded by the q-binomial theorem as Less obviously, we can write and we may expand both subproducts separately using the q-binomial theorem. This yields Multiplying this latter product out and combining like terms gives Finally, equating powers of between the two expressions yields the desired result. This argument may also be phrased in terms of expanding the product in two different ways, where A and B are operators (for example, a pair of matrices) that "q-commute," that is, that satisfy BA = qAB. Notes
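The identity's formula was lost in extraction; for reference, it is usually stated as follows (this is the standard form with the standard q-binomial-coefficient notation, not necessarily the article's original typography):

```latex
% q-Vandermonde identity, standard statement
\binom{m+n}{k}_{\!q}
  \;=\; \sum_{j} \binom{m}{k-j}_{\!q} \binom{n}{j}_{\!q}\, q^{j(m-k+j)}
```

Setting $q = 1$ recovers the classical Chu–Vandermonde identity, since each q-binomial coefficient degenerates to the ordinary binomial coefficient and the power of $q$ becomes 1.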
https://en.wikipedia.org/wiki/Abel%E2%80%93Plana%20formula
In mathematics, the Abel–Plana formula is a summation formula discovered independently by Niels Henrik Abel and Giovanni Antonio Amedeo Plana. It states that For the case we have It holds for functions ƒ that are holomorphic in the region Re(z) ≥ 0, and satisfy a suitable growth condition in this region; for example it is enough to assume that |ƒ| is bounded by C/|z|^(1+ε) in this region for some constants C, ε > 0, though the formula also holds under much weaker bounds. An example is provided by the Hurwitz zeta function, which holds for all , . Another powerful example is applying the formula to the function : we obtain where is the gamma function, is the polylogarithm and . Abel also gave the following variation for alternating sums: which is related to the Lindelöf summation formula Proof Let be holomorphic on , such that , and for , . Taking with the residue theorem Then Using the Cauchy integral theorem for the last one, thus obtaining This identity stays true by analytic continuation everywhere the integral converges; letting we obtain the Abel–Plana formula. The case ƒ(0) ≠ 0 is obtained similarly, replacing by two integrals following the same curves with a small indentation on the left and right of 0. See also Euler–Maclaurin summation formula Euler–Boole summation Ramanujan summation
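The formula's statement was lost in extraction; for reference, the Abel–Plana formula in its usual form reads (standard statement, reproduced from memory rather than from the article's own typography):

```latex
% Abel–Plana formula, usual statement for f holomorphic on Re(z) >= 0
\sum_{n=0}^{\infty} f(n)
  \;=\; \int_{0}^{\infty} f(x)\,dx
  \;+\; \frac{f(0)}{2}
  \;+\; i \int_{0}^{\infty} \frac{f(ix) - f(-ix)}{e^{2\pi x} - 1}\, dx
```

The last integral converges under the growth condition stated above, since the denominator $e^{2\pi x} - 1$ grows exponentially while the numerator is controlled by the bound on $|f|$.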
https://en.wikipedia.org/wiki/Nomen%20illegitimum
Nomen illegitimum (Latin for illegitimate name) is a technical term, used mainly in botany. It is usually abbreviated as nom. illeg. Although the International Code of Nomenclature for algae, fungi, and plants uses Latin terms for other kinds of name (e.g. nomen conservandum for "conserved name"), the glossary defines the English phrase "illegitimate name" rather than the Latin equivalent. However, the Latin abbreviation is widely used by botanists and mycologists. A superfluous name is often an illegitimate name. Again, although the glossary defines the English phrase, the Latin equivalent nomen superfluum, abbreviated nom. superfl. is widely used by botanists. Definition A nomen illegitimum is a validly published name, but one that contravenes some of the articles laid down by the International Botanical Congress. The name could be illegitimate because: (article 52) it was superfluous at its time of publication, i.e., the taxon (as represented by the type) already has a name, or (articles 53 and 54) the name has already been applied to another plant (a homonym). For the procedure of rejecting otherwise legitimate names, see conserved name. The qualification above concerning the taxon and the type is important. A name can be superfluous but not illegitimate if it would be legitimate for a different circumscription. For example, the family name Salicaceae, based on the "type genus" Salix, was published by Charles-François Brisseau de Mirbel in 1815. So when in 1818 Lorenz Chrysanth von Vest published the name Carpinaceae (based on the genus Carpinus) for a family explicitly including the genus Salix, it was superfluous: "Salicaceae" was already the correct name for Vest's circumscription; "Carpinaceae" is superfluous for a family containing Salix. However, the name is not illegitimate, since Carpinus is a legitimate name. If Carpinus were in future placed in a family where no genus had been used as the basis for a family name earlier than Vest's name (e.g.
https://en.wikipedia.org/wiki/Modern%20searches%20for%20Lorentz%20violation
Modern searches for Lorentz violation are scientific studies that look for deviations from Lorentz invariance or symmetry, a set of fundamental frameworks that underpin modern science and fundamental physics in particular. These studies try to determine whether violations or exceptions might exist for well-known physical laws such as special relativity and CPT symmetry, as predicted by some variations of quantum gravity, string theory, and some alternatives to general relativity. Lorentz violations concern the fundamental predictions of special relativity, such as the principle of relativity, the constancy of the speed of light in all inertial frames of reference, and time dilation, as well as the predictions of the standard model of particle physics. To assess and predict possible violations, test theories of special relativity and effective field theories (EFT) such as the Standard-Model Extension (SME) have been invented. These models introduce Lorentz and CPT violations through spontaneous symmetry breaking caused by hypothetical background fields, resulting in some sort of preferred frame effects. This could lead, for instance, to modifications of the dispersion relation, causing differences between the maximal attainable speed of matter and the speed of light. Both terrestrial and astronomical experiments have been carried out, and new experimental techniques have been introduced. No Lorentz violations have been measured thus far, and exceptions in which positive results were reported have been refuted or lack further confirmations. For discussions of many experiments, see Mattingly (2005). For a detailed list of results of recent experimental searches, see Kostelecký and Russell (2008–2013). For a recent overview and history of Lorentz violating models, see Liberati (2013). Assessing Lorentz invariance violations Early models assessing the possibility of slight deviations from Lorentz invariance have been published between the 1960s and the 1990s. In ad
https://en.wikipedia.org/wiki/Multiple-scale%20analysis
In mathematics and physics, multiple-scale analysis (also called the method of multiple scales) comprises techniques used to construct uniformly valid approximations to the solutions of perturbation problems, both for small as well as large values of the independent variables. This is done by introducing fast-scale and slow-scale variables for an independent variable, and subsequently treating these variables, fast and slow, as if they are independent. In the solution process of the perturbation problem thereafter, the resulting additional freedom – introduced by the new independent variables – is used to remove (unwanted) secular terms. The latter puts constraints on the approximate solution, which are called solvability conditions. Mathematics research from about the 1980s proposes that coordinate transforms and invariant manifolds provide a sounder support for multiscale modelling (for example, see center manifold and slow manifold). Example: undamped Duffing equation Differential equation and energy conservation As an example for the method of multiple-scale analysis, consider the undamped and unforced Duffing equation: which is a second-order ordinary differential equation describing a nonlinear oscillator. A solution y(t) is sought for small values of the (positive) nonlinearity parameter 0 < ε ≪ 1. The undamped Duffing equation is known to be a Hamiltonian system: with q = y(t) and p = dy/dt. Consequently, the Hamiltonian H(p, q) is a conserved quantity, a constant, equal to H = ½ + ¼ ε for the given initial conditions. This implies that both y and dy/dt have to be bounded: Straightforward perturbation-series solution A regular perturbation-series approach to the problem proceeds by writing and substituting this into the undamped Duffing equation. Matching powers of gives the system of equations Solving these subject to the initial conditions yields Note that the last term between the square braces is secular: it grows without bound for large |t|.
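The secular breakdown of the straightforward series, and the uniform validity of the multiple-scale result, can be seen numerically. The sketch below assumes the initial conditions y(0) = 1, dy/dt(0) = 0 (not stated explicitly in the excerpt above), for which the first-order regular-perturbation solution is y ≈ cos t + ε[(cos 3t − cos t)/32 − (3/8) t sin t], and the first-order multiple-scale approximation is y ≈ cos((1 + 3ε/8) t):

```python
# Compare a high-accuracy RK4 integration of y'' + y + eps*y^3 = 0 with
# (a) the regular perturbation series, whose secular term (3/8)*eps*t*sin(t)
#     grows without bound, and
# (b) the multiple-scale approximation, which stays uniformly accurate.
import math

eps = 0.1

def rk4(t_end, dt=1e-3):
    # Classical fourth-order Runge-Kutta on the first-order system
    # y' = v, v' = -y - eps*y^3, with y(0) = 1, v(0) = 0.
    def acc(y):
        return -y - eps * y ** 3
    y, v = 1.0, 0.0
    for _ in range(round(t_end / dt)):
        k1y, k1v = v, acc(y)
        k2y, k2v = v + 0.5 * dt * k1v, acc(y + 0.5 * dt * k1y)
        k3y, k3v = v + 0.5 * dt * k2v, acc(y + 0.5 * dt * k2y)
        k4y, k4v = v + dt * k3v, acc(y + dt * k3y)
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return y

def regular(t):
    # First-order regular perturbation series (contains the secular term).
    return math.cos(t) + eps * ((math.cos(3 * t) - math.cos(t)) / 32
                                - 0.375 * t * math.sin(t))

def multiscale(t):
    # First-order multiple-scale result: frequency shifted to 1 + 3*eps/8.
    return math.cos((1 + 3 * eps / 8) * t)

t = 50.0
exact = rk4(t)
print(abs(regular(t) - exact), abs(multiscale(t) - exact))
# The first error is O(eps*t) (order one by t = 50); the second stays O(eps).
```

At t = 50 with ε = 0.1 the secular term alone has magnitude of order one, while the multiple-scale error remains a few percent, which is the whole point of the method.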
https://en.wikipedia.org/wiki/Lymphoma
Lymphoma is a group of blood and lymph tumors that develop from lymphocytes (a type of white blood cell). The name typically refers to just the cancerous versions rather than all such tumours. Signs and symptoms may include enlarged lymph nodes, fever, drenching sweats, unintended weight loss, itching, and constantly feeling tired. The enlarged lymph nodes are usually painless. The sweats are most common at night. Many subtypes of lymphomas are known. The two main categories of lymphomas are the non-Hodgkin lymphoma (NHL) (90% of cases) and Hodgkin lymphoma (HL) (10%). Lymphomas, leukemias and myelomas are a part of the broader group of tumors of the hematopoietic and lymphoid tissues. Risk factors for Hodgkin lymphoma include infection with Epstein–Barr virus and a history of the disease in the family. Risk factors for common types of non-Hodgkin lymphomas include autoimmune diseases, HIV/AIDS, infection with human T-lymphotropic virus, immunosuppressant medications, and some pesticides. In 2014, the International Agency for Research on Cancer updated its classification of trichloroethylene to Group 1, indicating that sufficient evidence exists that it causes cancer of the kidney in humans as well as some evidence of cancer of the liver and non-Hodgkin's lymphoma. Eating large amounts of red meat and tobacco smoking may also increase the risk. Diagnosis, if enlarged lymph nodes are present, is usually by lymph node biopsy. Blood, urine, and bone marrow testing may also be useful in the diagnosis. Medical imaging may then be done to determine if and where the cancer has spread. Lymphoma most often spreads to the lungs, liver, and brain. Treatment may involve one or more of the following: chemotherapy, radiation therapy, proton therapy, targeted therapy, and surgery. In some non-Hodgkin lymphomas, an increased amount of protein produced by the lymphoma cells causes the blood to become so thick that plasmapheresis is performed to remove the protein. Watchful waiti
https://en.wikipedia.org/wiki/Mesodinium%20nuclear%20code
The Mesodinium nuclear code (translation table 29) is a genetic code used by the nuclear genome of the ciliates Mesodinium and Myrionecta. The code (29)    AAs = FFLLSSSSYYYYCC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG Starts = --------------*--------------------M----------------------------  Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG  Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG  Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U). Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), and Valine (Val, V). Differences from the standard code See also List of all genetic codes: translation tables 1 to 16, and 21 to 31. The genetic codes database.
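The four NCBI-style strings above fully determine the codon table: reading the i-th character of Base1, Base2, and Base3 gives the i-th codon, and the i-th character of AAs gives its translation. A short sketch (the sequence translated is a made-up example) builds the mapping and shows the defining feature of table 29, namely that TAA and TAG encode tyrosine rather than acting as stop codons:

```python
# Build the codon -> amino-acid mapping for translation table 29 from the
# NCBI-style strings quoted above (with the standard LLLL run for CTN).
aas   = "FFLLSSSSYYYYCC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
base1 = "TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG"
base2 = "TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG"
base3 = "TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG"

table29 = {b1 + b2 + b3: aa
           for b1, b2, b3, aa in zip(base1, base2, base3, aas)}

def translate(seq):
    """Translate a DNA sequence (length a multiple of 3) with table 29."""
    return "".join(table29[seq[i:i + 3]] for i in range(0, len(seq), 3))

print(table29["TAA"], table29["TAG"])  # Y Y  (stops in the standard code)
print(table29["TGA"])                  # *    (still a stop codon here)
print(translate("ATGTAATGG"))          # MYW
```

The dictionary has all 64 codons; only the TAA/TAG reassignment distinguishes it from the standard code.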
https://en.wikipedia.org/wiki/Incidents%20at%20SeaWorld%20parks
This is a summary of notable incidents that have taken place at various SeaWorld Parks & Entertainment-owned amusement parks, water parks or theme parks. This list is not intended to be a comprehensive list of every such event, but only those that have a significant impact on the parks or park operations, or are otherwise significantly newsworthy. The term incidents refers to major accidents, injuries, or deaths that occur at a SeaWorld Parks facility. While these incidents were required to be reported to regulatory authorities due to where they occurred, they usually fall into one of the following categories: Caused by negligence on the part of a guest. This can be refusal to follow specific ride safety instructions, or deliberate intent to violate park rules. The result of a guest's known, or unknown, health issues. Negligence on the part of the park, either by ride operator or maintenance. Act of God or a generic accident (e.g. slipping and falling) that is not a direct result of an action on anybody's part. Adventure Island Tampa Bay On September 10, 2011, a 21-year-old lifeguard was killed after being struck by lightning while clearing guests from the Key West Rapids ride tower due to inclement weather. No injuries to the guests were reported. The park installed a system in place to warn of incoming weather. Aquatica Orlando, Florida location On October 4, 2010, a 68-year-old man from Manchester, England was found unresponsive on Roa's Rapids. He was taken to Dr. Phillips Hospital but was later pronounced dead on arrival. Preliminary findings found he died of natural causes. On July 15, 2017, a 58-year-old man from Savannah, Georgia was also found unresponsive on Roa's Rapids. He died the next day. It was later revealed he had a history of health problems. San Antonio, Texas location On July 1, 2018, a woman was found unresponsive after riding the Wahalla Wave water slide. She was given CPR by lifeguards before being taken to a nearby Christus Sa
https://en.wikipedia.org/wiki/Mobile%20daughter%20card
The mobile daughter card, also known as an MDC or CDC (communications daughter card), is a notebook version of the AMR slot on the motherboard of a desktop computer. It is designed to interface with special Ethernet (EDC), modem (MDC) or Bluetooth (BDC) cards. Intel MDC specification 1.0 In 1999, Intel published a specification for mobile audio/modem daughter cards. The document defines a standard connector (AMP* 3-179397-0), mechanical elements including several form factors, and an electrical interface. The 30-pin connector carries power, several audio channels and AC-Link serial data. Up to two AC'97 codecs are supported on such a card. Several form factors are specified: 45 × 27 mm 45 × 37 mm 55 × 27 mm with RJ11 jack 55 × 37 mm with RJ11 jack 45 × 55 mm 45 × 70 mm 30-pin AMP* 3-179397-0 pinout See also Daughter board External links intel.com – MDC specification.pdf Mobile computers Motherboard expansion slot
https://en.wikipedia.org/wiki/Decision-making
In psychology, decision-making (also spelled decision making and decisionmaking) is regarded as the cognitive process resulting in the selection of a belief or a course of action among several possible alternative options. It could be either rational or irrational. The decision-making process is a reasoning process based on assumptions of values, preferences and beliefs of the decision-maker. Every decision-making process produces a final choice, which may or may not prompt action. Research about decision-making is also published under the label problem solving, particularly in European psychological research. Overview Decision-making can be regarded as a problem-solving activity yielding a solution deemed to be optimal, or at least satisfactory. It is therefore a process which can be more or less rational or irrational and can be based on explicit or tacit knowledge and beliefs. Tacit knowledge is often used to fill the gaps in complex decision-making processes. Usually, both of these types of knowledge, tacit and explicit, are used together in the decision-making process. Human performance has been the subject of active research from several perspectives: Psychological: examining individual decisions in the context of a set of needs, preferences and values the individual has or seeks. Cognitive: the decision-making process is regarded as a continuous process integrated in the interaction with the environment. Normative: the analysis of individual decisions concerned with the logic of decision-making, or communicative rationality, and the invariant choice it leads to. A major part of decision-making involves the analysis of a finite set of alternatives described in terms of evaluative criteria. Then the task might be to rank these alternatives in terms of how attractive they are to the decision-maker(s) when all the criteria are considered simultaneously. Another task might be to find the best alternative or to determine the relative total priority of each a
https://en.wikipedia.org/wiki/Sonoporation
Sonoporation, or cellular sonication, is the use of sound in the ultrasonic range for increasing the permeability of the cell plasma membrane. This technique is usually used in molecular biology and non-viral gene therapy in order to allow uptake of large molecules such as DNA into the cell, in a cell disruption process called transfection or transformation. Sonoporation employs the acoustic cavitation of microbubbles to enhance delivery of these large molecules. The exact mechanism of sonoporation-mediated membrane translocation remains unclear, with a few different hypotheses currently being explored. Sonoporation is under active study for the introduction of foreign genes in tissue culture cells, especially mammalian cells. Sonoporation is also being studied for use in targeted Gene therapy in vivo, in a medical treatment scenario whereby a patient is given modified DNA, and an ultrasonic transducer might target this modified DNA into specific regions of the patient's body. The bioactivity of this technique is similar to, and in some cases found superior to, electroporation. Extended exposure to low-frequency (<MHz) ultrasound has been demonstrated to result in complete cellular death (rupturing), thus cellular viability must also be accounted for when employing this technique. Equipment Sonoporation is performed with a dedicated sonoporator. Sonoporation may also be performed with custom-built piezoelectric transducers connected to bench-top function generators and acoustic amplifiers. Standard ultrasound medical devices may also be used in some applications. Measurement of the acoustics used in sonoporation is listed in terms of mechanical index, which quantifies the likelihood that exposure to diagnostic ultrasound will produce an adverse biological effect by a non-thermal action based on pressure. Microbubble contrast agents Microbubble contrast agents are generally used in contrast-enhanced ultrasound applications to enhance the acoustic impact of ult
https://en.wikipedia.org/wiki/Solenoid%20%28DNA%29
The solenoid structure of chromatin is a model for the structure of the 30 nm fibre. It is a secondary chromatin structure which helps to package eukaryotic DNA into the nucleus. Background Chromatin was first discovered by Walther Flemming by using aniline dyes to stain it. In 1974, it was first proposed by Roger Kornberg that chromatin was based on a repeating unit of a histone octamer and around 200 base pairs of DNA. The solenoid model was first proposed by John Finch and Aaron Klug in 1976. They used electron microscopy images and X-ray diffraction patterns to determine their model of the structure. This was the first model to be proposed for the structure of the 30 nm fibre. Structure DNA in the nucleus is wrapped around nucleosomes, which are histone octamers formed of core histone proteins; two histone H2A-H2B dimers, two histone H3 proteins, and two histone H4 proteins. The primary chromatin structure, the least-packed form, is the 11 nm, or “beads on a string” form, where DNA is wrapped around nucleosomes at relatively regular intervals, as Roger Kornberg proposed. Histone H1 protein binds to the site where DNA enters and exits the nucleosome, wrapping 147 base pairs around the histone core and stabilising the nucleosome, this structure is a chromatosome. In the solenoid structure, the nucleosomes fold up and are stacked, forming a helix. They are connected by bent linker DNA which positions sequential nucleosomes adjacent to one another in the helix. The nucleosomes are positioned with the histone H1 proteins facing toward the centre where they form a polymer. Finch and Klug determined that the helical structure had only one-start point because they mostly observed small pitch angles of 11 nm, which is about the same diameter as a nucleosome. There are approximately 6 nucleosomes in each turn of the helix. Finch and Klug actually observed a wide range of nucleosomes per turn but they put this down to flattening. Finch and Klug's electron microscopy
https://en.wikipedia.org/wiki/Deferrisoma
Deferrisoma is a genus of bacteria from the phylum Thermodesulfobacteriota. See also List of bacterial orders List of bacteria genera
https://en.wikipedia.org/wiki/Predicta
The Philco Predicta is a black and white television chassis style, which was made in several cabinet models with 17” or 21” screens by the American company Philco from 1958 to 1960. The Predicta was marketed as the world’s first swivel-screen television. Designed by Catherine Winkler, Severin Jonassen and Richard Whipple, it featured a picture tube (CRT) that separated from the rest of the cabinet. The safety mask on the front of the picture tube was made with a new organic plastic product by Eastman Plastics called “tenite”, which protected the glass and provided implosion protection for the user and produced a greenish tint. The Predicta also had a thinner picture tube than many other televisions at the time, which led it to be marketed as the more futuristic television set. Predicta television sets were constructed with a variety of cabinet configurations, some detachable but all separate from the tube itself and connected by wires. As its manufacturer explained in mid-1959, “The world’s first separate screen receiver, Philco’s ‘Predicta,’ marked a revolution in the design and engineering of television sets. Announced in June 1958, ‘Predicta’ was made possible by the development of a shorter 21-inch picture tube called the “SF” tube (for ‘semi-flat’), and a newly-designed contour chassis....’Slender Seventeener’ portables in the ‘Predicta’ line are the thinnest and most compact portables on the market today.” The Predicta Tandem model had a fully detached picture tube and an umbilical cable, which allowed the controls and speaker for the set to be next to the viewer, with the screen up to 25 feet away. Also unique to this version was a large handle over the top to carry the cathode ray tube portion wherever the viewer wanted it. This version also required more internal circuitry to drive the video signal through the cable. Philco also made Directa, a short-lived remote series in 1959 before the firm was bought by the Ford Motor Company in 1961. This set f
https://en.wikipedia.org/wiki/Interval%20class
In musical set theory, an interval class (often abbreviated: ic), also known as unordered pitch-class interval, interval distance, undirected interval, or "(even completely incorrectly) as 'interval mod 6'", is the shortest distance in pitch class space between two unordered pitch classes. For example, the interval class between pitch classes 4 and 9 is 5 because 9 − 4 = 5 is less than 4 − 9 = −5 ≡ 7 (mod 12). See modular arithmetic for more on modulo 12. The largest interval class is 6 since any greater interval n may be reduced to 12 − n. Use of interval classes The concept of interval class accounts for octave, enharmonic, and inversional equivalency. Consider, for instance, the following passage: In the example above, all four labeled pitch-pairs, or dyads, share a common "intervallic color." In atonal theory, this similarity is denoted by interval class, in this case ic 5. Tonal theory, however, classifies the four intervals differently: interval 1 as perfect fifth; 2, perfect twelfth; 3, diminished sixth; and 4, perfect fourth. Notation of interval classes The unordered pitch class interval i(a, b) may be defined as the smaller of the two ordered pitch-class intervals i⟨a, b⟩ and i⟨b, a⟩. While notating unordered intervals with parentheses, as in the example directly above, is perhaps the standard, some theorists, including Robert , prefer to use braces, as in i{a, b}. Both notations are considered acceptable. Table of interval class equivalencies See also Pitch interval Similarity relation Sources Further reading Friedmann, Michael (1990). Ear Training for Twentieth-Century Music. New Haven: Yale University Press. (cloth) (pbk) Musical set theory
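The definition reduces to one line of arithmetic on the mod-12 pitch-class circle, as this small sketch shows:

```python
# Unordered pitch-class interval (interval class): the shorter of the two
# distances between pitch classes a and b around the mod-12 circle.
def interval_class(a, b):
    d = (a - b) % 12
    return min(d, 12 - d)

print(interval_class(4, 9))  # 5  (the example from the text)
print(interval_class(0, 7))  # 5  (a perfect fifth reduces to ic 5)
print(interval_class(0, 6))  # 6  (the largest possible interval class)
```

Note that the result never exceeds 6, matching the statement that any greater interval n reduces to 12 − n.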
https://en.wikipedia.org/wiki/Phase%20detector%20characteristic
A phase detector characteristic is a function of phase difference describing the output of the phase detector. In the analysis of phase detectors (PDs), two kinds of models are usually considered: models in the signal (time) domain and models in the phase-frequency domain. To construct an adequate nonlinear mathematical model of a PD in the phase-frequency domain, it is necessary to find the phase detector characteristic. The inputs of a PD are high-frequency signals, and its output contains a low-frequency error-correction signal corresponding to the phase difference of the input signals. To suppress the high-frequency component of the PD output (if such a component exists), a low-pass filter is applied. The phase detector characteristic is the dependence of the signal at the output of the PD (in the phase-frequency domain) on the difference of the phases at the input of the PD. This characteristic depends on the realization of the PD and on the waveforms of the input signals. Consideration of the PD characteristic makes it possible to apply averaging methods for high-frequency oscillations and to pass from the analysis and simulation of non-autonomous models of phase synchronization systems in the time domain to the analysis and simulation of autonomous dynamical models in the phase-frequency domain. Analog multiplier phase detector characteristic Consider a classical phase detector implemented with an analog multiplier and a low-pass filter. Here and denote high-frequency signals, piecewise differentiable functions , represent the waveforms of the input signals, denote the phases, and denotes the output of the filter. If and satisfy the high-frequency conditions, then the phase detector characteristic is calculated in such a way that the time-domain model filter output and the filter output of the phase-frequency domain model are almost equal: Sine waveforms case Consider the simple case of harmonic waveforms and an integration filter. A standard engineering assumption is that the filter removes the upper sideband from the input but leaves t
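For sine waveforms, the multiplier PD characteristic can be seen directly by numerical averaging: the product of two unit sinusoids, averaged over one carrier period (an idealized low-pass/integration filter), leaves (1/2)cos of the phase difference. This is an illustrative sketch, not a model of any particular hardware:

```python
# Average sin(w*t + p1) * sin(w*t + p2) over one carrier period with a
# midpoint rule; the double-frequency term averages out, leaving the
# multiplier PD characteristic (1/2) * cos(p1 - p2).
import math

def pd_output(p1, p2, w=2 * math.pi * 1000.0, n=10000):
    T = 2 * math.pi / w  # one carrier period
    h = T / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        s += math.sin(w * t + p1) * math.sin(w * t + p2) * h
    return s / T  # time average over the period

for dphi in (0.0, math.pi / 3, math.pi / 2):
    print(pd_output(0.3 + dphi, 0.3), 0.5 * math.cos(dphi))
```

The two printed columns agree to machine precision, since the sum over uniformly spaced points annihilates the double-frequency component exactly.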
https://en.wikipedia.org/wiki/Tripartite%20symbiosis
Tripartite symbiosis is a type of symbiosis involving three species. This can include any combination of plants, animals, fungi, bacteria, or archaea, often in interkingdom symbiosis. Ants Fungus-growing ants Ants of the tribe Attini cultivate fungi. Microfungi, specialized to be parasites of the fungus gardens, coevolved with them. Allomerus-Hirtella-Trimmatostroma Allomerus decemarticulatus ants use Trimmatostroma sp. to create structures within Hirtella physophora. The fungi are connected endophytically and actively transfer nitrogen. Lichen The mycobiont in a lichen can form a relationship with both cyanobacteria and green algae as photobionts concurrently. Legumes Rhizobia are nitrogen-fixing bacteria that form symbiotic relationships with legumes. Sometimes, this is aided by the presence of a fungal species. This is most effective in undisturbed soil. The presence of mycorrhizae can improve rhizobial-liquorice nutrient transfer in droughts. Soybeans in particular can improve their ability to withstand soil salinity in the presence of both rhizobium and mycorrhizae.
https://en.wikipedia.org/wiki/Fundamental%20ephemeris
A fundamental ephemeris of the Solar System is a model of the objects of the system in space, with all of their positions and motions accurately represented. It is intended to be a high-precision primary reference for prediction and observation of those positions and motions, and which provides a basis for further refinement of the model. It is generally not intended to cover the entire life of the Solar System; usually a short-duration time span, perhaps a few centuries, is represented to high accuracy. Some long ephemerides cover several millennia to medium accuracy. They are published by the Jet Propulsion Laboratory as Development Ephemeris. The latest releases include DE430 which covers planetary and lunar ephemeris from Dec 21, 1549 to Jan 25, 2650 with high precision and is intended for general use for modern time periods . DE431 was created to cover a longer time period Aug 15, -13200 to March 15, 17191 with slightly less precision for use with historic observations and far reaching forecasted positions. DE432 was released as a minor update to DE430 with improvements to the Pluto barycenter in support of the New Horizons mission. Description The set of physical laws and numerical constants used in the calculation of the ephemeris must be self-consistent and precisely specified. The ephemeris must be calculated strictly in accordance with this set, which represents the most current knowledge of all relevant physical forces and effects. Current fundamental ephemerides are typically released with exact descriptions of all mathematical models, methods of computation, observational data, and adjustment to the observations at the time of their announcement. This may not have been the case in the past, as fundamental ephemerides were then computed from a collection of methods derived over a span of decades by many researchers. The independent variable of the ephemeris is always time. In the case of the most current ephemerides, it is a relativistic coordinate t
https://en.wikipedia.org/wiki/Olfactory%20fatigue
Olfactory fatigue, also known as odor fatigue, olfactory adaptation, and noseblindness, is the temporary, normal inability to distinguish a particular odor after a prolonged exposure to that airborne compound. For example, when entering a restaurant initially the odor of food is often perceived as being very strong, but after time the awareness of the odor normally fades to the point where the smell is not perceptible or is much weaker. After leaving the area of high odor, the sensitivity is restored with time. Anosmia is the permanent loss of the sense of smell, and is different from olfactory fatigue. It is a term commonly used in wine tasting, where one loses the ability to smell and distinguish wine bouquet after sniffing at wine(s) continuously for an extended period of time. The term is also used in the study of indoor air quality, for example, in the perception of odors from people, tobacco, and cleaning agents. Since odor detection may be an indicator that exposure to certain chemicals is occurring, olfactory fatigue can also reduce one's awareness about chemical hazard exposure. Olfactory fatigue is an example of neural adaptation. The body becomes desensitized to stimuli to prevent the overloading of the nervous system, thus allowing it to respond to new stimuli that are 'out of the ordinary'. Mechanism Olfactory fatigue is the result of a negative, stabilizing feedback loop which lowers the olfactory neuron's sensitivity the longer it is stimulated by an odorant. The increase of Ca2+ ions in the olfactory neuron in response to stimulus both charges the transfer of information to the brain and activates a limiting system to prevent overstimulation. After olfactory neurons depolarize in response to an odorant, the G-protein mediated second messenger response activates adenylyl cyclase, increasing cyclic AMP (cAMP) concentration inside a cell, which then opens a cyclic nucleotide gated cation channel. The influx of Ca2+ ions through this channel trigg
https://en.wikipedia.org/wiki/Duality%20%28electricity%20and%20magnetism%29
In physics, the electromagnetic dual concept is based on the idea that, in the static case, electromagnetism has two separate facets: electric fields and magnetic fields. Expressions in one of these will have a directly analogous, or dual, expression in the other. The reason for this can ultimately be traced to special relativity, where applying the Lorentz transformation to the electric field will transform it into a magnetic field. These are special cases of duality in mathematics. The electric field () is the dual of the magnetic field (). The electric displacement field () is the dual of the magnetic flux density (). Faraday's law of induction is the dual of Ampère's circuital law. Gauss's law for electric field is the dual of Gauss's law for magnetism. The electric potential is the dual of the magnetic potential. Permittivity is the dual of permeability. Electrostriction is the dual of magnetostriction. Piezoelectricity is the dual of piezomagnetism. Ferroelectricity is the dual of ferromagnetism. An electrostatic motor is the dual of a magnetic motor; Electrets are the dual of permanent magnets; The Faraday effect is the dual of the Kerr effect; The Aharonov–Casher effect is the dual to the Aharonov–Bohm effect; The hypothetical magnetic monopole is the dual of electric charge. See also Maxwell's equations Duality (electrical circuits) List of dualities Electromagnetism Duality theories
https://en.wikipedia.org/wiki/Access%20modifiers
Access modifiers (or access specifiers) are keywords in object-oriented languages that set the accessibility of classes, methods, and other members. Access modifiers are a specific part of programming language syntax used to facilitate the encapsulation of components. In C++, there are only three access modifiers. C# extends the number of them to six (seven counting the file modifier added in C# 11), while Java has four access levels but only three keywords for this purpose. In Java, having no access modifier keyword defaults to package-private access. When a class is declared as public, it is accessible to other classes defined in the same package as well as those defined in other packages. This is the most commonly used specifier for classes. However, a top-level class itself cannot be declared as private. If no access specifier is stated, the default access restrictions are applied: the class will be accessible to other classes in the same package but inaccessible to classes outside the package. When we say that a class is inaccessible, it simply means that we cannot create an object of that class or declare a variable of that class type. The protected access specifier likewise cannot be applied to a top-level class. Names of keywords C++ uses the three modifiers called public, protected, and private. C# has the modifiers public, protected, internal, private, protected internal, private protected, and file. Java has public, protected, and private; package-private is the default, used if no other access modifier keyword is specified. The meaning of these modifiers may differ from one language to another. A comparison of the keywords, ordered from the most restrictive to the most open, and their meaning in these three languages follows. Their visibility ranges from the same class to the package where the class is defined to a general access permission. Below, the maximal access is written into the table. In Swift, there are five different access levels relative to both the source file in which the entity is defined and the
https://en.wikipedia.org/wiki/Brain%20mapping
Brain mapping is a set of neuroscience techniques predicated on the mapping of (biological) quantities or properties onto spatial representations of the (human or non-human) brain resulting in maps. According to the definition established in 2013 by the Society for Brain Mapping and Therapeutics (SBMT), brain mapping is specifically defined, in summary, as the study of the anatomy and function of the brain and spinal cord through the use of imaging, immunohistochemistry, molecular & optogenetics, stem cell and cellular biology, engineering, neurophysiology and nanotechnology. Overview All neuroimaging is considered part of brain mapping. Brain mapping can be conceived as a higher form of neuroimaging, producing brain images supplemented by the result of additional (imaging or non-imaging) data processing or analysis, such as maps projecting (measures of) behavior onto brain regions (see fMRI). One such map, called a connectogram, depicts cortical regions around a circle, organized by lobes. Concentric circles within the ring represent various common neurological measurements, such as cortical thickness or curvature. In the center of the circles, lines representing white matter fibers illustrate the connections between cortical regions, weighted by fractional anisotropy and strength of connection. At higher resolutions brain maps are called connectomes. These maps incorporate individual neural connections in the brain and are often presented as wiring diagrams. Brain mapping techniques are constantly evolving, and rely on the development and refinement of image acquisition, representation, analysis, visualization and interpretation techniques. Functional and structural neuroimaging are at the core of the mapping aspect of brain mapping. Some scientists have criticized the brain image-based claims made in scientific journals and the popular press, like the discovery of "the part of the brain responsible" for things like love or musical abilities or a specific memory.
https://en.wikipedia.org/wiki/Linear%20span
In mathematics, the linear span (also called the linear hull or just span) of a set S of vectors (from a vector space), denoted span(S), is defined as the set of all linear combinations of the vectors in S. For example, two linearly independent vectors span a plane. The linear span can be characterized either as the intersection of all linear subspaces that contain S, or as the smallest subspace containing S. The linear span of a set of vectors is therefore a vector space itself. Spans can be generalized to matroids and modules. To express that a vector space V is a linear span of a subset S, one commonly uses the following phrases—either: S spans V, S is a spanning set of V, V is spanned/generated by S, or S is a generator or generator set of V. Definition Given a vector space V over a field K, the span of a set S of vectors (not necessarily finite) is defined to be the intersection W of all subspaces of V that contain S. W is referred to as the subspace spanned by S, or by the vectors in S. Conversely, S is called a spanning set of W, and we say that S spans W. Alternatively, the span of S may be defined as the set of all finite linear combinations of elements (vectors) of S, which follows from the above definition. In the case of infinite S, infinite linear combinations (i.e. where a combination may involve an infinite sum, assuming that such sums are defined somehow as in, say, a Banach space) are excluded by the definition; a generalization that allows these is not equivalent. Examples The real vector space R^3 has {(−1, 0, 0), (0, 1, 0), (0, 0, 1)} as a spanning set. This particular spanning set is also a basis. If (−1, 0, 0) were replaced by (1, 0, 0), it would also form the canonical basis of R^3. Another spanning set for the same space is given by {(1, 2, 3), (0, 1, 2), (−1, 1/2, 3), (1, 1, 1)}, but this set is not a basis, because it is linearly dependent. The set {(1, 0, 0), (0, 1, 0), (1, 1, 0)} is not a spanning set of R^3, since its span is the space of all vectors in R^3 whose last component is zero. That space is also spanne
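The dependence and basis claims in the examples above can be checked numerically with a 3×3 determinant: a nonzero determinant means three vectors are linearly independent (and so form a basis of the space they sit in), while a zero determinant means they are dependent. A small sketch in plain Python:

```python
def det3(m):
    """Determinant of a 3x3 matrix, by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# The spanning set {(-1,0,0), (0,1,0), (0,0,1)} is a basis: nonzero determinant.
assert det3([(-1, 0, 0), (0, 1, 0), (0, 0, 1)]) == -1

# Three vectors from the second spanning set are already linearly dependent,
# so the four-element set cannot be a basis even though it still spans the space.
assert det3([(1, 2, 3), (0, 1, 2), (1, 1, 1)]) == 0
```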
https://en.wikipedia.org/wiki/Pentraxins
Pentraxins (PTX), also known as pentaxins, are an evolutionarily conserved family of proteins characterised by containing a pentraxin protein domain. Proteins of the pentraxin family are involved in acute immunological responses. They are a class of pattern recognition receptors (PRRs). They are a superfamily of multifunctional conserved proteins, some of which are components of the humoral arm of innate immunity and behave as functional ancestors of antibodies (Abs). Some are classical acute phase proteins (APPs), known for over a century. Structure Pentraxins are characterised by calcium dependent ligand binding and a distinctive flattened β-jellyroll structure similar to that of the legume lectins. The name "pentraxin" is derived from the Greek words for five (πέντε, pente) and axle (axis), relating to the radial symmetry of five monomers forming a ring approximately 95Å across and 35Å deep observed in the first members of this family to be identified. The "short" pentraxins include serum amyloid P component (SAP) and C-reactive protein (CRP). The "long" pentraxins include PTX3 (a cytokine-modulated molecule) and several neuronal pentraxins. Family members Three of the principal members of the pentraxin family are serum proteins: namely, CRP, SAP, and hamster female protein (FP). The PTX3 (or TSG-14) protein is a cytokine-induced protein that is homologous to CRPs and SAPs. C-reactive protein C-reactive protein is expressed during the acute phase response to tissue injury or inflammation in mammals. The protein resembles antibody and performs several functions associated with host defence: it promotes agglutination, bacterial capsular swelling and phagocytosis, and activates the classical complement pathway through its calcium-dependent binding to phosphocholine. CRPs have also been sequenced in an invertebrate, Limulus polyphemus (the Atlantic horseshoe crab), where they are a normal constituent of the hemolymph. Pentraxin 3 Pentraxin 3 (PTX3) is an acute pha
https://en.wikipedia.org/wiki/Leucine
Leucine (symbol Leu or L) is an essential amino acid that is used in the biosynthesis of proteins. Leucine is an α-amino acid, meaning it contains an α-amino group (which is in the protonated −NH3+ form under biological conditions), an α-carboxylic acid group (which is in the deprotonated −COO− form under biological conditions), and a side chain isobutyl group, making it a non-polar aliphatic amino acid. It is essential in humans, meaning the body cannot synthesize it: it must be obtained from the diet. Human dietary sources are foods that contain protein, such as meats, dairy products, soy products, and beans and other legumes. It is encoded by the codons UUA, UUG, CUU, CUC, CUA, and CUG. Like valine and isoleucine, leucine is a branched-chain amino acid. The primary metabolic end products of leucine metabolism are acetyl-CoA and acetoacetate; consequently, it is one of the two exclusively ketogenic amino acids, with lysine being the other. It is the most important ketogenic amino acid in humans. Leucine and β-hydroxy β-methylbutyric acid, a minor leucine metabolite, exhibit pharmacological activity in humans and have been demonstrated to promote protein biosynthesis via the phosphorylation of the mechanistic target of rapamycin (mTOR). Dietary leucine As a food additive, L-leucine has E number E641 and is classified as a flavor enhancer. Requirements The Food and Nutrition Board (FNB) of the U.S. Institute of Medicine set Recommended Dietary Allowances (RDAs) for essential amino acids in 2002. For leucine, the RDA for adults 19 years and older is 42 mg/kg body weight/day. Sources Health effects As a dietary supplement, leucine has been found to slow the degradation of muscle tissue by increasing the synthesis of muscle proteins in aged rats. However, results of comparative studies are conflicting. Long-term leucine supplementation does not increase muscle mass or strength in healthy elderly men. More studies are needed, preferably ones based on an objective, random sa
https://en.wikipedia.org/wiki/Acoustic%20jar
Acoustic jars, also known by the Greek name echea (ηχεία, literally "echoers") or as sounding vases, are ceramic vessels found set into the walls, ceilings, and sometimes floors, of medieval churches. They are believed to have been intended to improve the sound of singing, and to have been inspired by the theories of Vitruvius. Such vessels were supposedly used in ancient Greek theatres to enhance the voices of performers, though no archaeological evidence has been found. Construction The vessels mentioned by Vitruvius in his De architectura are made of bronze and designed specifically for each unique theatre. They were placed in niches between the theatre's seats, specifically so that nothing was touching them, and mathematical calculations determined where they should be placed. "They should be set upside down, and be supported on the side facing the stage by wedges not less than half a foot high." They were typically made of bronze, but could also be made of earthenware. History Classical Antiquity The use of tuned bronze vases set in niches to modify the acoustics in Greek and Roman theatre is described by the Roman writer Vitruvius. Vitruvius mentions the Roman general Lucius Mummius, who destroyed the city of Corinth and its theatre. He then brought the remains of the building's bronze echea back to Rome. After selling the fragments, Mummius used the money to make a dedicatory offering at the temple of Luna. No original examples survive from the ancient world. Middle Ages In the Middle Ages the idea of the acoustic vessel re-emerged. Examples have been found in around 200 churches, about half of them in France. The vessels vary greatly in shape and positioning, but, unlike those described by Vitruvius, they are ceramic, and are enclosed within the fabric of the building. The function of the jars was established with the discovery of a reference in the Chronicle of the Celestins of Metz. The chronicler recorded that, in 1432: on the vigil of the
https://en.wikipedia.org/wiki/Authors%20of%20Plant%20Names
Authors of Plant Names (Brummitt & Powell) by Richard Kenneth Brummitt and C. Emma Powell, 1992, is a print database of accepted standardized abbreviations used for citing the author who validly published the name of a taxon. The database is now maintained online at the International Plant Names Index. The book provides recommended abbreviations for authors' names that help to distinguish authors with the same surname when giving the full name of a taxon. It deals with authors who validly published the name of a flowering plant, gymnosperm, fern, bryophyte, alga, fungus or fossil plant. Prior to its publication in 1992, many abbreviations for authors to be cited could be found in Taxonomic literature. A selective guide to botanical publications and collections with dates, commentaries and types by F. A. Stafleu & R. S. Cowan, 1976–1988. The International Code of Nomenclature for algae, fungi, and plants (ICN) governs the naming of these organisms, and suggests that a taxon be fully identified by its name and its author, but does not require that abbreviations be used when citing an author for a taxon. When abbreviations are used, the ICN recommends that Brummitt & Powell's Authors of plant names (1992) and the websites of the International Plant Names Index and Index Fungorum be used to find "unambiguous" abbreviations. Brummitt & Powell may not be international in scope, and it may be missing abbreviations for authors who validly published taxa during some time spans. A full name, rather than an abbreviation, may also make it easier to locate the original publication for the taxon name.
https://en.wikipedia.org/wiki/WEAP
WEAP (the Water Evaluation and Planning system) is a model-building tool for water resource planning and policy analysis that is distributed at no charge to non-profit, academic, and governmental organizations in developing countries. WEAP can be used to create simulations of water demand, supply, runoff, evapotranspiration, water allocation, infiltration, crop irrigation requirements, instream flow requirements, ecosystem services, groundwater and surface storage, reservoir operations, pollution generation, treatment, discharge, and instream water quality. The simulations can be created under scenarios of varying policy, hydrology, climate, land use, technology, and socio-economic factors. WEAP links to the USGS MODFLOW groundwater flow model and the US EPA QUAL2K surface water quality model. WEAP was created in 1988 and continues to be developed and supported by the U.S. Center of the Stockholm Environment Institute, a non-profit research institute based at Tufts University in Somerville, Massachusetts. It is used for climate change vulnerability studies and adaptation planning and has been applied by researchers and planners in thousands of organizations worldwide. The main steps in a WEAP simulation application are establishing the 'current accounts', building scenarios, and evaluating those scenarios against chosen criteria.
https://en.wikipedia.org/wiki/Interactive%20application%20security%20testing
Interactive application security testing (abbreviated as IAST) is a security testing method that detects software vulnerabilities by interacting with the program, coupled with observation and sensors. Tools implementing the approach have been launched by several application security companies. It is distinct from static application security testing, which does not interact with the program, and from dynamic application security testing, which treats the program as a black box. It may be considered a mix of both.
https://en.wikipedia.org/wiki/Windows%20NT
Windows NT is a proprietary graphical operating system produced by Microsoft, the first version of which was released on July 27, 1993. It is a processor-independent, multiprocessing and multi-user operating system. The first version of Windows NT was Windows NT 3.1, produced for workstations and server computers. It was a commercially focused operating system intended to complement consumer versions of Windows that were based on MS-DOS (including Windows 1.0 through Windows 3.1x). Gradually, the Windows NT family was expanded into Microsoft's general-purpose operating system product line for all personal computers, deprecating the Windows 9x family. "NT" was formerly expanded to "New Technology" but no longer carries any specific meaning. Starting with Windows 2000, "NT" was removed from the product name and is only included in the product version string, along with several low-level places within the system. In fact, NT was a trademark of Northern Telecom (later Nortel) at the time, which Microsoft was forced to acknowledge on the product packaging. NT was the first purely 32-bit version of Windows, whereas its consumer-oriented counterparts, Windows 3.1x and Windows 9x, were 16-bit/32-bit hybrids. It is a multi-architecture operating system. Initially, it supported several instruction set architectures, including IA-32, MIPS, and DEC Alpha; support for PowerPC, Itanium, x64, and ARM was added later. The latest versions support x86 (including IA-32 and x64) and ARM. Major features of the Windows NT family include the Windows shell, Windows API, Native API, Active Directory, Group Policy, Hardware Abstraction Layer, NTFS, BitLocker, Windows Store, Windows Update, and Hyper-V. Versions of Windows NT are installed using Windows Setup, which, starting with Windows Vista, uses the Windows Preinstallation Environment, a lightweight version of Windows NT made for deployment of the operating system. Naming It has been suggested that Dave Cutler intended t
https://en.wikipedia.org/wiki/Erasure%20code
In coding theory, an erasure code is a forward error correction (FEC) code under the assumption of bit erasures (rather than bit errors), which transforms a message of k symbols into a longer message (code word) with n symbols such that the original message can be recovered from a subset of the n symbols. The fraction r = k/n is called the code rate. The fraction k’/k, where k’ denotes the number of symbols required for recovery, is called reception efficiency. Optimal erasure codes Optimal erasure codes have the property that any k out of the n code word symbols are sufficient to recover the original message (i.e., they have optimal reception efficiency). Optimal erasure codes are maximum distance separable codes (MDS codes). Parity check Parity check is the special case where n = k + 1. From a set of k values , a checksum is computed and appended to the k source values: The set of k + 1 values is now consistent with regard to the checksum. If one of these values, , is erased, it can be easily recovered by summing the remaining variables: Polynomial oversampling Example: Err-mail (k = 2) In the simple case where k = 2, redundancy symbols may be created by sampling different points along the line between the two original symbols. This is pictured with a simple example, called err-mail: Alice wants to send her telephone number (555629) to Bob using err-mail. Err-mail works just like e-mail, except About half of all the mail gets lost. Messages longer than 5 characters are illegal. It is very expensive (similar to air-mail). Instead of asking Bob to acknowledge the messages she sends, Alice devises the following scheme. She breaks her telephone number up into two parts a = 555, b = 629, and sends 2 messages – "A=555" and "B=629" – to Bob. She constructs a linear function, , in this case , such that and . She computes the values f(3), f(4), and f(5), and then transmits three redundant messages: "C=703", "D=777" and "E=851". Bob knows that the form of f(
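The err-mail scheme above can be sketched in a few lines: f is the line through (1, a) and (2, b), oversampled at five points, and any two surviving samples recover both digits of the phone number. This is a toy over the integers (real erasure codes such as Reed–Solomon work over finite fields, and the helper names here are illustrative):

```python
def encode(a, b, n=5):
    """Sample the line f(i) = a + (b - a)*(i - 1) at i = 1..n."""
    f = lambda i: a + (b - a) * (i - 1)
    return {i: f(i) for i in range(1, n + 1)}

def decode(received):
    """Recover (a, b) = (f(1), f(2)) from any two surviving samples."""
    (i, fi), (j, fj) = sorted(received.items())[:2]
    slope = (fj - fi) // (j - i)        # exact here; division in a field in general
    a = fi - slope * (i - 1)
    return a, a + slope

symbols = encode(555, 629)
assert symbols == {1: 555, 2: 629, 3: 703, 4: 777, 5: 851}
# Any two of the five messages suffice -- e.g. only "D=777" and "E=851" arrive:
assert decode({4: 777, 5: 851}) == (555, 629)
```

Here k = 2 and n = 5, so the code tolerates the loss of any three of the five messages, matching the "any k out of n" property of an optimal erasure code.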
https://en.wikipedia.org/wiki/Immunome
The immunome is the set of genes and proteins that constitute the immune system, excluding those that are widespread in other cell types, and not involved in the immune response itself. It is further defined as the set of peptides derived from the proteome that interact with the immune system. There are numerous ongoing efforts to characterize and sequence the immunomes of humans, mice, and elements of non-human primates. Typically, immunomes are studied using immunofluorescence microscopy to determine the presence and activity of immune-related enzymes and pathways. Practical applications for studying the immunome include vaccines, therapeutic proteins, and further treatment of other diseases. The study of the immunome falls under the field of immunomics. Etymology The word immunome is a portmanteau of the words "immune" and "chromosome." See omics for a further discussion. Efforts to characterize The exact size of the human immunome is currently unknown and has been a topic of study for decades. However, the amount of information it encodes is said to exceed the size of the human genome by several orders of magnitude due to, at least in part, somatic hypermutation and junctional diversity. There are several ongoing efforts to characterize the immunomes of humans and other species. One major effort, launched in 2016, is a collaborative project between The Human Vaccines Project, Vanderbilt University Medical Center, and Illumina, Inc. This project is entitled the Human Immunome Program and its goal is to decipher the complete collection of B and T immune cell receptors from the human population. Thousands of individuals will need to be studied in order to meet this goal, and they will need to represent different ages, genders, ethnicities, and geographical origins. Furthermore, people with diseases and people who have undergone vaccination will need to be studied as well. The results of the program will be shared as an open-source database. The sequencing proj
https://en.wikipedia.org/wiki/Membrane%20channel
Membrane channels are a family of biological membrane proteins which allow ions (ion channels), water (aquaporins) or other solutes to pass passively through the membrane down their electrochemical gradient. They are studied using a range of channelomics experimental and mathematical techniques. Insights have suggested endocannabinoids (eCBs) as molecules that can regulate the opening of these channels during diverse conditions. Properties Hemichannels A hemichannel is a membrane channel made up of six subunits, and is defined as one-half of a gap junction channel. Hemichannels consist of connexins. Pannexin Pannexins are involved in the process of purinergic signalling. They release adenosine triphosphate (ATP), which activates purinergic receptors. On the other hand, purinergic receptor activation can also lead to the opening of the channel, via a positive feedback loop. In addition, P2Y receptors activate inositol trisphosphate, which leads to a transient increase in intracellular calcium, and opens both connexin and pannexin channels, therefore contributing to the propagation of calcium waves across astrocytes and epithelial cells.
https://en.wikipedia.org/wiki/Computer%20bureau
A computer bureau is a service bureau providing computer services. Computer bureaus developed during the early 1960s, following the development of time-sharing operating systems. These allowed the services of a single large and expensive mainframe computer to be divided up and sold as a fungible commodity. The development of telecommunications and the first modems encouraged the growth of computer bureaus, as they allowed immediate access to the computer facilities from a customer's own premises. The computer bureau model shrank during the 1980s, as cheap commodity computers, particularly the PC clone but also the minicomputer, allowed services to be hosted on-premises. See also Batch processing Cloud computing Grid computing Service Bureau Corporation Utility computing
https://en.wikipedia.org/wiki/Google%20Cloud%20Storage
Google Cloud Storage is a RESTful online file storage web service for storing and accessing data on Google Cloud Platform infrastructure. The service combines the performance and scalability of Google's cloud with advanced security and sharing capabilities. It is an Infrastructure as a Service (IaaS) offering, comparable to Amazon S3. Unlike Google Drive, and according to its service specifications, Google Cloud Storage appears to be more suitable for enterprises. Feasibility Users activate the service through the API Developer Console. Google Account holders must first access the service by logging in and then agreeing to the Terms of Service, followed by enabling a billing structure. Design Google Cloud Storage stores objects (originally limited to 100 GiB, currently up to 5 TiB) in projects, which are organized into buckets. All requests are authorized using Identity and Access Management policies or access control lists associated with a user or service account. Bucket names and keys are chosen so that objects are addressable using HTTP URLs: https://storage.googleapis.com/bucket/object http://bucket.storage.googleapis.com/object https://storage.cloud.google.com/bucket/object Features Google Cloud Storage offers four storage classes, identical in throughput, latency and durability. The four classes, Multi-Regional Storage, Regional Storage, Nearline Storage, and Coldline Storage, differ in their pricing, minimum storage durations, and availability. Interoperability - Google Cloud Storage is interoperable with other cloud storage tools and libraries that work with services such as Amazon S3 and Eucalyptus Systems. Consistency - Upload operations to Google Cloud Storage are atomic, providing strong read-after-write consistency for all upload operations. Access Control - Google Cloud Storage uses access control lists (ACLs) to manage object and bucket access. An ACL consists of one or more entries, each granting a specific permission to a scope. Perm
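The three URL forms quoted above can be built mechanically from a bucket and object name. The sketch below is purely illustrative of that addressing scheme; the percent-encoding of object names is a general-HTTP assumption here, not a claim from this article, and the bucket/object names are made up:

```python
from urllib.parse import quote

def object_urls(bucket, obj):
    """Build the three equivalent HTTP addresses for one stored object."""
    path = quote(obj, safe="/")  # percent-encode the object name, keeping '/' separators
    return [
        f"https://storage.googleapis.com/{bucket}/{path}",
        f"http://{bucket}.storage.googleapis.com/{path}",
        f"https://storage.cloud.google.com/{bucket}/{path}",
    ]

urls = object_urls("my-bucket", "reports/2021 summary.csv")
assert urls[0] == "https://storage.googleapis.com/my-bucket/reports/2021%20summary.csv"
```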
https://en.wikipedia.org/wiki/Liquid%20smoke
Liquid smoke is a water-soluble yellow to red liquid used as a flavoring as a substitute for cooking with wood smoke, while retaining a similar flavor. It can be used to flavor any meat or vegetable. It is available as pure condensed smoke from various types of wood, and as derivative formulas containing additives. History Pyrolysis, or thermal decomposition of wood in a low-oxygen environment, originated prehistorically to produce charcoal. Condensates of the vapors eventually were made and found useful as preservatives. For centuries, water-based condensates of wood smoke were popularly called "wood vinegar", presumably due to their use as food vinegar. Pliny the Elder recorded in one of his ten volumes of Natural History the use of wood vinegar as an embalming agent, declaring it superior to other treatments he used. In 1658, Johann Rudolf Glauber outlined the methods to produce wood vinegar during charcoal making. Further, he described the use of the water-insoluble tar fraction as a wood preservative and documented the freezing of the wood vinegar to concentrate it. Use of the term "pyroligneous acid" for wood vinegar emerged by 1788. In the United States, in 1895, E. H. Wright inaugurated the era of commercial distribution of pyroligneous acid under a new name, liquid smoke. Among Wright's innovations were the standardization of the product, marketing and distribution. Wright's Liquid Smoke, since 1997 owned by B&G Foods, and its modern-day successors have always been the subject of controversy about their contents and production, but in 1913, Wright prevailed in a federal misbranding case decided by Judge Van Valkenburg. Historically, all pyroligneous acid products, Wright's product and many other condensates have been made as byproducts of charcoal manufacturing, which was of greater value. Chemicals such as methanol, acetic acid and acetone have been isolated from these condensates and sold. With the advent of lower cost fossil fuel sources, today these and othe
https://en.wikipedia.org/wiki/Eventual%20consistency
Eventual consistency is a consistency model used in distributed computing to achieve high availability that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. Eventual consistency, also called optimistic replication, is widely deployed in distributed systems and has origins in early mobile computing projects. A system that has achieved eventual consistency is often said to have converged, or achieved replica convergence. Eventual consistency is a weak guarantee – most stronger models, like linearizability, are trivially eventually consistent. Eventually-consistent services are often classified as providing BASE semantics (basically-available, soft-state, eventual consistency), in contrast to traditional ACID (atomicity, consistency, isolation, durability). In chemistry, a base is the opposite of an acid, which helps in remembering the acronym. Rough definitions of each term in BASE are: Basically available: reading and writing operations are available as much as possible (using all nodes of a database cluster), but might not be consistent (the write might not persist after conflicts are reconciled, and the read might not get the latest write) Soft-state: without consistency guarantees, after some amount of time, we only have some probability of knowing the state, since it might not yet have converged Eventually consistent: if we execute some writes and then the system functions long enough, we can know the state of the data; any further reads of that data item will return the same value Eventual consistency is sometimes criticized as increasing the complexity of distributed software applications. This is partly because eventual consistency is purely a liveness guarantee (reads eventually return the same value) and does not guarantee safety: an eventually consistent system can return any value before it converges. Conflic
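One common way replicas converge is a last-writer-wins register: each replica keeps only the highest-timestamped (timestamp, value) pair it has seen, and replicas exchange state until all agree. The toy sketch below illustrates convergence under this rule; real systems need synchronized clocks or version vectors to order writes, which is glossed over here:

```python
class Replica:
    """A last-writer-wins register: state is the highest-timestamped write seen."""
    def __init__(self):
        self.state = (0, None)          # (timestamp, value)

    def write(self, ts, value):
        self.merge((ts, value))

    def merge(self, other_state):
        self.state = max(self.state, other_state)   # higher timestamp wins

# Three replicas accept conflicting writes while partitioned...
a, b, c = Replica(), Replica(), Replica()
a.write(1, "draft")
b.write(3, "final")
c.write(2, "review")

# ...then anti-entropy gossip runs: after enough exchanges, all replicas converge
# on the last update, and further reads of the item return the same value.
for r1 in (a, b, c):
    for r2 in (a, b, c):
        r2.merge(r1.state)

assert a.state == b.state == c.state == (3, "final")
```

Note that before the gossip loop runs, a read could observe any of the three values, which is exactly the safety gap criticized above: eventual consistency bounds what is returned only after convergence.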
https://en.wikipedia.org/wiki/Package%20manager
A package manager or package-management system is a collection of software tools that automates the process of installing, upgrading, configuring, and removing computer programs for a computer in a consistent manner. A package manager deals with packages, distributions of software and data in archive files. Packages contain metadata, such as the software's name, description of its purpose, version number, vendor, checksum (preferably a cryptographic hash function), and a list of dependencies necessary for the software to run properly. Upon installation, metadata is stored in a local package database. Package managers typically maintain a database of software dependencies and version information to prevent software mismatches and missing prerequisites. They work closely with software repositories, binary repository managers, and app stores. Package managers are designed to eliminate the need for manual installs and updates. This can be particularly useful for large enterprises whose operating systems typically consist of hundreds or even tens of thousands of distinct software packages. History An early package manager was SMIT (and its backend installp) from IBM AIX. SMIT was introduced with AIX 3.0 in 1989. Early package managers, from around 1994, had no automatic dependency resolution but could already drastically simplify the process of adding and removing software from a running system. By around 1995, beginning with CPAN, package managers began doing the work of downloading packages from a repository, automatically resolving its dependencies and installing them as needed, making it much easier to install, uninstall and update software from a system. Functions A software package is an archive file containing a computer program as well as necessary metadata for its deployment. The computer program can be in source code that has to be compiled and built first. Package metadata include package description, package version, and dependencies (other packages t
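The automatic dependency resolution described above is at heart a graph problem: each package may be installed only after everything it depends on. A minimal sketch using a depth-first topological sort follows; the package names are hypothetical, and real package managers additionally handle version constraints, conflicts, and cycles:

```python
def install_order(package, depends_on, ordered=None, seen=None):
    """Return packages in an order where dependencies always come first."""
    ordered = [] if ordered is None else ordered
    seen = set() if seen is None else seen
    if package in seen:
        return ordered
    seen.add(package)
    for dep in depends_on.get(package, []):
        install_order(dep, depends_on, ordered, seen)
    ordered.append(package)                 # appended only after all its dependencies
    return ordered

# A hypothetical metadata snippet: each package lists its dependencies.
depends_on = {
    "webapp": ["framework", "dbdriver"],
    "framework": ["runtime"],
    "dbdriver": ["runtime"],
    "runtime": [],
}
order = install_order("webapp", depends_on)
assert order.index("runtime") < order.index("framework")
assert order.index("dbdriver") < order.index("webapp")
assert order[-1] == "webapp"
```

This is the skeleton behind "automatically resolving its dependencies and installing them as needed": the repository supplies the dependency metadata, and the sort yields a safe install sequence.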
https://en.wikipedia.org/wiki/Release%20early%2C%20release%20often
Release early, release often (also known as ship early, ship often, or time-based releases, and sometimes abbreviated RERO) is a software development philosophy that emphasizes the importance of early and frequent releases in creating a tight feedback loop between developers and testers or users, in contrast to a feature-based release strategy. Advocates argue that this allows the software development to progress faster, enables the user to help define what the software will become, better conforms to the users' requirements for the software, and ultimately results in higher quality software. The development philosophy attempts to eliminate the risk of creating software that no one will use. This philosophy was popularized by Eric S. Raymond in his 1997 essay The Cathedral and the Bazaar, where Raymond stated "Release early. Release often. And listen to your customers". The philosophy was originally applied to the development of the Linux kernel and other open-source software, but has also been applied to closed source, commercial software development. The alternative to the release early, release often philosophy is aiming to provide only polished, bug-free releases. Advocates of RERO question whether this would in fact result in higher-quality releases. See also Worse is better Programming paradigm Software development process Agile software development Minimum viable product Vote early and vote often
https://en.wikipedia.org/wiki/Header%20check%20sequence
A header check sequence (HCS) is an error checking feature for various header data structures, such as in the media access control (MAC) header of Ethernet. It may consist of a cyclic redundancy check (CRC) of the frame, obtained as the remainder of the division (modulo 2) by the generator polynomial multiplied by the content of the header excluding the HCS field. The HCS can be one octet long, as in WiMAX, or a 16-bit value for cable modems. See also Checksum
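The remainder computation described above can be sketched bit by bit. The polynomial and initial value below are those of the common CRC-16/CCITT-FALSE variant, chosen purely for illustration, not the parameters mandated by Ethernet, WiMAX, or DOCSIS; the header bytes are likewise hypothetical:

```python
def crc16(data, poly=0x1021, init=0xFFFF):
    """Bitwise CRC-16: remainder of division modulo 2 by the generator polynomial."""
    crc = init
    for byte in data:
        crc ^= byte << 8                  # feed the next header byte into the register
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF                 # keep the register 16 bits wide
    return crc

header = b"\x00\x1a\x2b\x3c\x4d\x5e"      # hypothetical header bytes, HCS field excluded
hcs = crc16(header)

# Any single-bit error in the header changes the check sequence:
corrupted = bytes([header[0] ^ 0x01]) + header[1:]
assert crc16(corrupted) != hcs
assert crc16(b"123456789") == 0x29B1      # standard check value for this CRC variant
```

A receiver recomputes the CRC over the received header and compares it against the transmitted HCS field; a mismatch flags the header as corrupted.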
https://en.wikipedia.org/wiki/Cofactor%20engineering
Cofactor engineering, a subset of metabolic engineering, is defined as the manipulation of the use of cofactors in an organism’s metabolic pathways. In cofactor engineering, the concentrations of cofactors are changed in order to maximize or minimize metabolic fluxes. This type of engineering can be used to optimize the production of a metabolite product or to increase the efficiency of a metabolic network. The use of engineered single-celled organisms to create valuable chemicals from cheap raw materials is growing, and cofactor engineering can play a crucial role in maximizing production. The field has gained more popularity in the past decade and has several practical applications in the chemical manufacturing, bioengineering, and pharmaceutical industries. Cofactors are non-protein compounds that bind to proteins and are required for the protein's normal catalytic function. Cofactors can be considered “helper molecules” in biological activity, and often affect the functionality of enzymes. Cofactors can be both organic and inorganic compounds. Some examples of inorganic cofactors are iron or magnesium, and some examples of organic cofactors include ATP or coenzyme A. Organic cofactors are more specifically known as coenzymes, and many enzymes require the addition of coenzymes to assume normal catalytic function in a metabolic reaction. The coenzymes bind to the active site of an enzyme to promote catalysis. By engineering cofactors and coenzymes, a naturally occurring metabolic reaction can be manipulated to optimize the output of a metabolic network. Background Cofactors were discovered by Arthur Harden and William Young in 1906, when they found that the rate of alcoholic fermentation in unboiled yeast extracts increased when boiled yeast extract was added. A few years later, Hans von Euler-Chelpin identified the cofactor in the boiled extract as NAD+. Other cofactors, such as ATP and coenzyme A, were discovered later in the 1900s. The mechani
https://en.wikipedia.org/wiki/Tom%20Douglas%20Spies
Dr. Tom Douglas Spies (September 21, 1902 in Ravenna, Texas – February 28, 1960 in New York City) was a distinguished American physician and medical educator. He was an authority in the study of nutritional diseases. In the 1930s, he contributed significantly to finding a cure for pellagra, a nutritional disease that once afflicted millions in the American South. Later, he also made a large contribution to finding cure for tropical sprue. For his efforts in elimination of pellagra, Time Magazine named him as 1938 "Man of the Year" in comprehensive science. Education A member of Phi Beta Kappa, Spies received a B.A. degree from the University of Texas in 1923 and an M.D. from Harvard in 1927. He spent the next two years in pathology in Boston hospitals and then went to Western Reserve University to become an instructor in medicine until 1935. Work Spies became assistant professor of medicine at the University of Cincinnati's College of Medicine (1935–1947). After 1947, he became an instructor at Northwestern University Medical School. Nutrition clinic Spies was best known as a director of Nutrition Clinic at the Hillman Hospital, Birmingham, Alabama, after 1936. He was invited to come to Birmingham in 1935 by James S. McLester, physician-in-chief of the Hillman Hospital, who was then also the President of the American Medical Association. In 1945, he and six social workers, including Martha Hutchinson, studied the effects of daily supplementation of milk on the growth and development of malnourished children. Other contributions Spies was appointed to the Food and Nutrition Board of National Research Council in 1943, and was a consultant on tropical medicine at Washington's Army Medical School, 1945. He labored with unremitting zeal to put thiamine, nicotinic acid, riboflavin, folic acid, vitamin B12 and thymine (5-methyl uracil) to use in clinical and preventive medicine. In the late 1940s, Spies experimented with the use of folic acid and other vitamins in
https://en.wikipedia.org/wiki/Pulmonary%20wedge%20pressure
The pulmonary wedge pressure (PWP) (also called pulmonary arterial wedge pressure (PAWP), pulmonary capillary wedge pressure (PCWP), pulmonary artery occlusion pressure (PAOP), or cross-sectional pressure) is the pressure measured by wedging a pulmonary artery catheter with an inflated balloon into a small pulmonary arterial branch. It estimates the left atrial pressure. Pulmonary venous wedge pressure (PVWP) is not synonymous with the above; PVWP has been shown to correlate with pulmonary artery pressures in studies, albeit unreliably. Physiologically, distinctions can be drawn among pulmonary artery pressure, pulmonary capillary wedge pressure, pulmonary venous pressure and left atrial pressure, but not all of these can be measured in a clinical context. Noninvasive estimation techniques have been proposed. Clinical significance Because of the large compliance of the pulmonary circulation, the wedge pressure provides an indirect measure of left atrial pressure. For example, it is considered the gold standard for determining the cause of acute pulmonary edema; this is likely to be present at a PWP of >20 mmHg. It has also been used to diagnose the severity of left ventricular failure and mitral stenosis, given that elevated pulmonary capillary wedge pressure strongly suggests failure of left ventricular output. Traditionally, it was believed that pulmonary edema with normal PWP suggested a diagnosis of acute respiratory distress syndrome (ARDS) or non-cardiogenic pulmonary edema (as in opiate poisoning). However, since capillary hydrostatic pressure exceeds wedge pressure once the balloon is deflated (to promote a gradient for forward flow), a normal wedge pressure cannot conclusively differentiate between hydrostatic pulmonary edema and ARDS. The physiological range is 6–12 mmHg.
https://en.wikipedia.org/wiki/Dinesh%20Singh%20%28academic%29
Professor Dinesh Singh, chancellor of K.R. Mangalam University, is an Indian professor of mathematics. He served as the 21st Vice-Chancellor of the University of Delhi, is a distinguished fellow of Hackspace at Imperial College London, and has been an adjunct professor of mathematics at the University of Houston. For his services to the nation he was conferred the Padma Shri, the fourth-highest civilian award of the Republic of India. Early life and background Dinesh Singh earned his B.Sc. (Hons.) in mathematics in 1975 and M.A. in mathematics in 1977 from St. Stephen's College, followed by an M.Phil. in mathematics in 1978 from the University of Delhi. He earned a PhD in mathematics from Imperial College London in 1981. He holds numerous honorary doctorates, awarded by, among others, the University of Edinburgh, the National Institute of Technology, Kurukshetra, University College Cork, Ireland, and the University of Houston. Career Singh started his career as a lecturer at St. Stephen's College, University of Delhi, in 1981. Thereafter he joined the Department of Mathematics, University of Delhi, in 1987. He was Head of the Department of Mathematics at the University of Delhi from December 2004 to September 2005. He served the University of Delhi as Director, South Campus, from 2005 to 2010. He officiated briefly as Pro Vice-Chancellor, University of Delhi, before being appointed Vice-Chancellor on 29 October 2010. His areas of specialization include functional analysis, operator theory, and harmonic analysis. He is an adjunct professor at the University of Houston and has also taught at the Indian Institute of Technology Delhi and the Indian Statistical Institute, Delhi. He is noted for being instrumental in setting up the Cluster Innovation Centre at the University of Delhi, an interdisciplinary, first-of-its-kind research center particularly promoting undergraduate research. He also populariz
https://en.wikipedia.org/wiki/Food%20Safety%20and%20Standards%20Authority%20of%20India
Food Safety and Standards Authority of India (FSSAI) is a statutory body established under the Ministry of Health & Family Welfare, Government of India. The FSSAI has been established under the Food Safety and Standards Act, 2006, which is a consolidating statute related to food safety and regulation in India. FSSAI is responsible for protecting and promoting public health through the regulation and supervision of food safety. The FSSAI is headed by a non-executive chairperson, appointed by the Central Government, who holds or has held a position not below the rank of Secretary to the Government of India. Shri Sudhansh Pant, IAS is the current chairperson for FSSAI and Shri Ganji Kamala V Rao, IAS is the current chief executive officer for FSSAI. The FSSAI has its headquarters at New Delhi. The authority also has 4 regional offices located in Delhi, Mumbai, Kolkata, and Chennai. There are 22 referral laboratories notified by FSSAI, 72 State/UT laboratories located throughout India, and 112 NABL-accredited private laboratories notified by FSSAI. In 2021, with the aim of benefitting industries involved in manufacturing, handling, packaging and selling of food items, FSSAI decided to grant perpetual licenses to restaurants and food manufacturers on the condition that they file their returns every year. Food Safety and Standards Authority of India License or Registration is required for any food business in India that manufactures, stores, transports, or distributes food. Depending on the size and nature of the company, FSSAI registration or license may be required. History FSSAI was established on 5 September 2008 under the Food Safety and Standards Act, 2006. The FSSAI consists of a chairperson & 22 members. The FSSAI is responsible for setting standards for food so that there is one body to deal with and no confusion in the minds of consumers, traders, manufacturers, and investors. Ministry of Health & Family Wel
https://en.wikipedia.org/wiki/J.%20Anthony%20Hall
J. Anthony Hall FREng is a leading British software engineer specializing in the use of formal methods, especially the Z notation. Anthony Hall was educated at the University of Oxford with a BA in chemistry and a DPhil in theoretical chemistry. His subsequent posts have included: ICI Research Fellow, Department of Theoretical Chemistry, University of Sheffield (1971–1973) Principal Scientific Officer, British Museum Research Laboratory (1973–1980) Senior Consultant, Systems Programming Limited (1980–1984) Principal Consultant, Systems Designers (1984–1986) Visiting Professor, Carnegie Mellon University (1994) Principal Consultant, Praxis Critical Systems (1986–2004) In particular, Hall has worked on software development using formal methods for the UK National Air Traffic Services (NATS). He has been an invited speaker at conferences concerned with formal methods, requirements engineering and software engineering. Since 2004, Hall has been an independent consultant. He has also been a visiting professor at the University of York. Hall was the founding chair of ForTIA, the Formal Techniques Industry Association. Selected publications Anthony Hall, Seven Myths of Formal Methods, IEEE Software, September 1990, pp. 11–19. Anthony Hall and Roderick Chapman, Correctness by Construction: Developing a Commercial Secure System, IEEE Software, January/February 2002, pp. 18–25.
https://en.wikipedia.org/wiki/Asynchronous%20method%20invocation
In multithreaded computer programming, asynchronous method invocation (AMI), also known as asynchronous method calls or the asynchronous pattern, is a design pattern in which the call site is not blocked while waiting for the called code to finish. Instead, the calling thread is notified when the reply arrives. Polling for a reply is an undesired option. Background AMI is a design pattern for the asynchronous invocation of potentially long-running methods of an object. It is equivalent to the IOU ("I owe you") pattern described in 1996 by Allan Vermeulen. In most programming languages a called method is executed synchronously, i.e. in the thread of execution from which it is invoked. If the method takes a long time to complete, e.g. because it is loading data over the internet, the calling thread is blocked until the method has finished. When this is not desired, it is possible to start a "worker thread" and invoke the method from there. In most programming environments this requires many lines of code, especially if care is taken to avoid the overhead that may be caused by creating many threads. AMI solves this problem by augmenting a potentially long-running ("synchronous") object method with an "asynchronous" variant that returns immediately, along with additional methods that make it easy to receive notification of completion, or to wait for completion at a later time. One common use of AMI is in the active object design pattern. Alternatives are synchronous method invocation and future objects. An example of an application that may make use of AMI is a web browser that needs to display a web page even before all images are loaded. Since a method is a special case of a procedure, asynchronous method invocation is a special case of asynchronous procedure call. Implementations Java The FutureTask class in Java uses events to solve the same problem. This pattern is a variant of AMI whose implementation carries more overhead, but it is useful for objects rep
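A minimal sketch of the pattern using Java's standard java.util.concurrent classes: submitting a Callable to an ExecutorService returns a Future, the "IOU", immediately, so the caller can continue working and collect the reply later. The method name and values below are illustrative, not from any particular API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AmiExample {
    // A potentially long-running "synchronous" method.
    static int slowSquare(int n) throws InterruptedException {
        Thread.sleep(100);  // stands in for slow work such as network I/O
        return n * n;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // Asynchronous variant: returns immediately with a Future ("IOU")
        // while the work runs on a worker thread.
        Future<Integer> reply = pool.submit(() -> slowSquare(7));
        // ... the calling thread is free to do other work here ...
        int result = reply.get();    // wait for completion at a later time
        System.out.println(result);  // prints 49
        pool.shutdown();
    }
}
```

Future.get() blocks only when the reply is actually needed; completion callbacks (the notification style the article describes) are available through CompletableFuture in the same package.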
https://en.wikipedia.org/wiki/Jean-Joseph%20Kapeller
Jean-Joseph Kapeller (24 July 1706 – 29 November 1790) was a French painter, architect and geometer. Born in Marseille, he was influenced by Jean-Baptiste de La Rose and Joseph Vernet, mainly producing landscapes and seascapes such as his 1756 masterwork Embarcation of the Expeditionary Corps for Minorca at the Port of Marseille under the command of the Duke of Richelieu. He and his contemporary Charles François Lacroix de Marseille produced seascapes which marked a step-change in the appreciation of seascapes in Provence in the second half of the 18th century. Kapeller and Michel-François Dandré-Bardon co-founded Marseille's Académie de peinture et de sculpture, with Kapeller becoming its director-rector in 1771 and giving classes in drawing and geometry there which were attended by his main pupil Henry d'Arles. Kapeller was also a major figure in freemasonry in the city, becoming grand master of the Chevaliers de l'Orient lodge. He also became rector of the third order Franciscans at the Récollets in 1745 and a member of a chapel of penitents. Famous in Marseille in his own time, he seems never to have become much known outside Provence and most of his works are now lost, though some now hang in public collections in Toulon and Marseille. Life Early life His father Jean-Georges had been born in Meilen, Zurich and married Marie-Anne Daignan in Marseille on 11 January 1701. Jean-Georges was also a painter and seems to have been highly regarded by contemporary art critics, who referred to "the ardour of his zeal for everything which concerned the school, artists and matters of art". Jean-Georges died before 1723, possibly during the plague which struck Marseille in 1720, according to Joseph Billioud. Jean-Joseph Kapeller married Anne-Marie Mouren on 24 January 1723 in the collegiate church of Saint-Martin. The couple had two children, Marie-Eugénie (called "widow Mullard" in Jean-Joseph's will of 1778) and Pierre-Paul
https://en.wikipedia.org/wiki/Sleep%20paralysis
Sleep paralysis is a state, occurring while waking up or falling asleep, in which a person is conscious but in a state of complete, full-body paralysis. During an episode, one may hallucinate (hear, feel, or see things that are not there), which often results in fear. Episodes generally last no more than a few minutes. It can recur multiple times or occur as a single episode. The condition may occur in those who are otherwise healthy or those with narcolepsy, or it may run in families as a result of specific genetic changes. The condition can be triggered by sleep deprivation, psychological stress, or abnormal sleep cycles. The underlying mechanism is believed to involve a dysfunction in REM sleep. Lucid dreaming does not affect the likelihood of sleep paralysis, but some lucid dreamers use episodes of it as a way of entering a lucid dream. Diagnosis is based on a person's description. Other conditions that can present similarly include narcolepsy, atonic seizure, and hypokalemic periodic paralysis. Treatment options for sleep paralysis have been poorly studied. It is recommended that people be reassured that the condition is common and generally not serious. Other efforts that may be tried include sleep hygiene, cognitive behavioral therapy, and antidepressants. Between 8% and 50% of people experience sleep paralysis at some point during their life. About 5% of people have regular episodes. Males and females are affected equally. Sleep paralysis has been described throughout history. It is believed to have played a role in the creation of stories about alien abduction and other paranormal events. Symptoms and signs The main symptom of sleep paralysis is being unable to move or speak during awakening. Imagined sounds such as humming, hissing, static, zapping and buzzing noises are reported during sleep paralysis. Other sounds such as voices, whispers and roars are also experienced. One may also feel pressure on the chest and intense pain in the head during an episode.
https://en.wikipedia.org/wiki/Ergastic%20substance
Ergastic substances are non-protoplasmic materials found in cells. The living protoplasm of a cell is sometimes called the bioplasm and distinct from the ergastic substances of the cell. The latter are usually organic or inorganic substances that are products of metabolism, and include crystals, oil drops, gums, tannins, resins and other compounds that can aid the organism in defense, maintenance of cellular structure, or just substance storage. Ergastic substances may appear in the protoplasm, in vacuoles, or in the cell wall. Carbohydrates Reserve carbohydrate of plants are the derivatives of the end products of photosynthesis. Cellulose and starch are the main ergastic substances of plant cells. Cellulose is the chief component of the cell wall, and starch occurs as a reserve material in the protoplasm. Starch, as starch grains, arise almost exclusively in plastids, especially leucoplasts and amyloplasts. Proteins Although proteins are the main component of living protoplasm, proteins can occur as inactive, ergastic bodies—in an amorphous or crystalline (or crystalloid) form. A well-known amorphous ergastic protein is gluten. Fats and oils Fats (lipids) and oils are widely distributed in plant tissues. Substances related to fats—waxes, suberin, and cutin—occur as protective layers in or on the cell wall. Crystals Animals eliminate excess inorganic materials; plants mostly deposit such material in their tissues. Such mineral matter is mostly salts of calcium and anhydrides of silica. Raphides are a type of elongated crystalline form of calcium oxalate aggregated in bundles within a plant cell. Because of the needle-like form, large numbers in the tissue of, say, a leaf can render the leaf unpalatable to herbivores (see Dieffenbachia and taro). Druse Cystolith
https://en.wikipedia.org/wiki/International%20Cooperative%20Biodiversity%20Groups
International Cooperative Biodiversity Groups (or ICBG) is a program under the National Institutes of Health, the National Science Foundation and USAID, established in 1993 to promote collaborative research between American universities and research institutions in countries that harbor unique genetic resources in the form of biodiversity—the practice known as bioprospecting. The basic aim of the program is to benefit both the host community and the global scientific community by discovering and researching the possibilities for new solutions to human health problems based on previously unexplored genetic resources. It therefore seeks to conserve biodiversity, and to foster, encourage and support sustainable practices of usage of biological resources. Groups are headed by a principal investigator who coordinates the efforts of the research consortium, which often has branches in the US and the host country as well as in the countries of other third-party institutions. There are currently International Cooperative Biodiversity Groups operating in Latin America, Africa, Asia and Papua New Guinea. The Maya ICBG, a group dedicated to collecting the ethnobiological knowledge of the Maya population of Chiapas, Mexico, led by Dr. Brent Berlin, was closed in 2001, after two years of funding, following accusations of having failed to obtain prior informed consent.
https://en.wikipedia.org/wiki/Disposal%20of%20human%20corpses
Disposal of human corpses, also called final disposition, is the practice and process of dealing with the remains of a deceased human being. Disposal methods may need to account for the fact that soft tissue will decompose relatively rapidly, while the skeleton will remain intact for thousands of years under certain conditions. Several methods for disposal are practiced. A funeral is a ceremony that may accompany the final disposition. Regardless, the manner of disposal is often dominated by spirituality with a desire to hold vigil for the dead and may be highly ritualized. In cases of mass death, such as war and natural disaster, or in which the means of disposal are limited, practical concerns may be of greater priority. Ancient methods of disposing of dead bodies include cremation practiced by the Romans, Greeks, Hindus, and some Mayans; burial practiced by the Chinese, Japanese, Bali, Jews, Christians, and Muslims, as well as some Mayans; mummification, a type of embalming, practiced by the Ancient Egyptians; and the sky burial and a similar method of disposal called Tower of Silence practiced by Tibetan Buddhists, some Mongolians, and Zoroastrians. A modern method of quasi-final disposition, though still rare, is cryonics; this being putatively near-final, though nowhere close to demonstrated. Commonly practiced legal methods Some cultures place the dead in tombs of various sorts, either individually, or in specially designated tracts of land that house tombs. Burial in a graveyard is one common form of tomb. In some places, burials are impractical because the groundwater is too high; therefore tombs are placed above ground, as is the case in New Orleans, Louisiana, US. Elsewhere, a separate building for a tomb is usually reserved for the socially prominent and wealthy; grand, above-ground tombs are called mausoleums. The socially prominent sometimes had the privilege of having their corpses stored in church crypts. In more recent times, however, this has
https://en.wikipedia.org/wiki/ISO%2014971
ISO 14971 Medical devices — Application of risk management to medical devices is a voluntary standard for the application of risk management to medical devices. "Voluntary standards do not replace national laws, with which standards' users are understood to comply and which take precedence" over voluntary standards such as ISO 13485 and ISO 14971. The ISO Technical Committee responsible for the maintenance of this standard is ISO/TC 210, working with IEC/SC 62A through Joint Working Group 1 (JWG1). This standard is the culmination of the work starting in ISO/IEC Guide 51 and ISO/IEC Guide 63. The third edition of ISO 14971 was published in December 2019 and supersedes the second edition of ISO 14971. Specifically, ISO 14971 is a nine-part standard which first establishes a framework for risk analysis, evaluation, control, and review, and also specifies a procedure for review and monitoring during production and post-production. ISO 14971:2012 was harmonized with respect to the three European Directives associated with medical devices through the three 'Zed' Annexes (ZA, ZB & ZC). The Annex ZA harmonized ISO 14971:2012 with the Medical Devices Directive 93/42/EEC of 1993. The Annex ZB harmonized ISO 14971:2012 with the Active Implantable Medical Device Directive 90/385/EEC of 1990. The Annex ZC harmonized ISO 14971:2012 with the In Vitro Diagnostic Medical Device Directive 98/79/EC of 1998. The 2021 addendum to ISO 14971 (ISO 14971:2019+A11:2021) was published to harmonize ISO 14971 with the two European Regulations associated with medical devices through the two 'Zed' Annexes (ZA & ZB). The Annex ZA harmonized ISO 14971 with the European Union's Medical Device Regulation (2017/745) of 2017. The Annex ZB harmonized ISO 14971 with the European Union's In Vitro Diagnostic Medical Devices Regulation (2017/746) of 2017. In 2013, a technical report ISO/TR 24971 was published by ISO/TC 210 to provide expert guidance on the application of this standard. The second edition of ISO 24971 wa
https://en.wikipedia.org/wiki/Equiareal%20map
In differential geometry, an equiareal map, sometimes called an authalic map, is a smooth map from one surface to another that preserves the areas of figures. Properties If M and N are two Riemannian (or pseudo-Riemannian) surfaces, then an equiareal map f from M to N can be characterized by any of the following equivalent conditions: The surface area of f(U) is equal to the area of U for every open set U on M. The pullback of the area element μN on N is equal to μM, the area element on M. At each point p of M and for all tangent vectors v and w to M at p, |df(v) ∧ df(w)| = |v ∧ w|, where ∧ denotes the Euclidean wedge product of vectors and df denotes the pushforward along f. Example An example of an equiareal map, due to Archimedes of Syracuse, is the projection from the unit sphere to the unit cylinder outward from their common axis. An explicit formula is f(x, y, z) = (x/√(x² + y²), y/√(x² + y²), z) for (x, y, z) a point on the unit sphere. Linear transformations Every Euclidean isometry of the Euclidean plane is equiareal, but the converse is not true. In fact, shear mapping and squeeze mapping are counterexamples to the converse. Shear mapping takes a rectangle to a parallelogram of the same area. Written in matrix form, a shear mapping along the x-axis is [[1, v], [0, 1]], sending (x, y) to (x + vy, y). Squeeze mapping lengthens and contracts the sides of a rectangle in a reciprocal manner so that the area is preserved. Written in matrix form, with λ > 1 the squeeze reads [[λ, 0], [0, 1/λ]], sending (x, y) to (λx, y/λ). A linear transformation multiplies areas by the absolute value of its determinant, so the equiareal linear transformations are precisely those with determinant ±1. Gaussian elimination shows that every equiareal linear transformation (rotations included) can be obtained by composing at most two shears along the axes, a squeeze and (if the determinant is negative) a reflection. In map projections In the context of geographic maps, a map projection is called equal-area, equivalent, authalic, equiareal, or area-preserving, if areas are preserved up to a constant factor; embedding the target map, usually considered a subset of R2, in the obvious way in R3, the requirement ab
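A short derivation, sketched here by parametrizing the sphere in the cylindrical coordinates θ (angle) and z (height), of why Archimedes' radial projection preserves area:

```latex
% Parametrize the unit sphere (poles removed) by angle theta and height z:
%   r(\theta, z) = \bigl(\sqrt{1-z^2}\cos\theta,\ \sqrt{1-z^2}\sin\theta,\ z\bigr)
\begin{align*}
|r_\theta|^2 &= 1 - z^2, \qquad
|r_z|^2 = \frac{z^2}{1-z^2} + 1 = \frac{1}{1-z^2}, \qquad
r_\theta \cdot r_z = 0, \\
|r_\theta \times r_z|
  &= \sqrt{\,|r_\theta|^2\,|r_z|^2 - (r_\theta \cdot r_z)^2\,}
   = \sqrt{(1-z^2)\cdot\tfrac{1}{1-z^2}} = 1 .
\end{align*}
% The cylinder point c(\theta, z) = (\cos\theta, \sin\theta, z) likewise has
% |c_\theta \times c_z| = 1, so both area elements equal d\theta\,dz, and the
% projection -- the identity map in (\theta, z) coordinates -- is equiareal.
```

This is Archimedes' hat-box theorem: the area of a band on the sphere depends only on its height, exactly as for the corresponding band on the circumscribed cylinder.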
https://en.wikipedia.org/wiki/Australian%20Square%20Kilometre%20Array%20Pathfinder
The Australian Square Kilometre Array Pathfinder (ASKAP) is a radio telescope array located at Murchison Radio-astronomy Observatory (MRO) in the Mid West region of Western Australia. The facility began as a technology demonstrator for the international Square Kilometre Array (SKA), an internationally planned radio telescope which will be larger and more sensitive. The ASKAP site has been selected as one of the SKA's two central locations. It is operated by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and forms part of the Australia Telescope National Facility. Construction commenced in late 2009 and first light was in October 2012. ASKAP consists of 36 identical parabolic antennas, each in diameter, working together as a single astronomical interferometer with a total collecting area of approximately . Each antenna is equipped with a phased-array feed (PAF), significantly increasing the field of view. This design provides both fast survey speed and high sensitivity. Description Development and construction of ASKAP was led by CSIRO Astronomy and Space Science (CASS), in collaboration with scientists and engineers in the Netherlands, Canada, and the US, as well as colleagues from Australian universities and industry partners in China. Design The construction and assembly of the dishes was completed in June 2012. ASKAP was designed as a synoptic telescope with a wide field-of-view, large spectral bandwidth, fast survey speed, and a large number of simultaneous baselines. The greatest technical challenge was the design and construction of the phased array feeds, which had not previously been used for radio astronomy, and so presented many new technical challenges, as well as the largest data rate so far encountered in a radio telescope. ASKAP is located in the Murchison district in Western Australia, a region that is extremely "radio-quiet" due to the low population density and resulting lack of radio interference (generated by h
https://en.wikipedia.org/wiki/Polyvinylcarbazole
Polyvinylcarbazole (PVK) is a temperature-resistant thermoplastic polymer produced by radical polymerization from the monomer N-vinylcarbazole. It is a photoconductive polymer and thus the basis for photorefractive polymers and organic light-emitting diodes. History Polyvinylcarbazole was discovered by the chemists Walter Reppe (1892-1969), Ernst Keyssner and Eugen Dorrer and patented by I.G. Farben in the USA in 1937. PVK was the first polymer whose photoconductivity was known. Starting in the 1960s, further polymers of this kind were sought. Production Polyvinylcarbazole is obtained from N-vinylcarbazole by radical polymerization in various ways. It can be produced by suspension polymerization at 180 °C with sodium chloride and potassium chromate as catalyst. Alternatively, AIBN can be used as a radical initiator, or a Ziegler-Natta catalyst. Properties Physical properties PVK can be used at temperatures of up to 160–170 °C and is therefore a temperature-resistant thermoplastic. The electrical conductivity changes depending on the illumination. For this reason, PVK is classified as a semiconductor or photoconductor. The polymer is extremely brittle, but the brittleness can be reduced by copolymerization with a little isoprene. Chemical properties Polyvinylcarbazole is soluble in aromatic hydrocarbons, halogenated hydrocarbons and ketones. It is resistant to acids, alkalis, polar solvents and aliphatic hydrocarbons. The addition of PVK to other plastic masses increases their temperature resistance. Use Due to its high price and special properties, the use of PVK is limited to special areas. It is used in insulation technology, electrophotography (e.g. in copiers and laser printers), for the fabrication of polymer photonic crystals, for organic light-emitting diodes and photovoltaic devices. In addition, PVK is a well-researched component in photorefractive polymers and therefore plays an important role in holography. Another application is the prod
https://en.wikipedia.org/wiki/Playfair%27s%20axiom
In geometry, Playfair's axiom is an axiom that can be used instead of the fifth postulate of Euclid (the parallel postulate): In a plane, given a line and a point not on it, at most one line parallel to the given line can be drawn through the point. It is equivalent to Euclid's parallel postulate in the context of Euclidean geometry and was named after the Scottish mathematician John Playfair. The "at most" clause is all that is needed since it can be proved from the first four axioms that at least one parallel line exists given a line L and a point P not on L, as follows: Construct a perpendicular: Using the axioms and previously established theorems, one can construct a line perpendicular to line L that passes through P. Construct another perpendicular: A second perpendicular line is drawn to the first one, starting from point P. Parallel Line: This second perpendicular line will be parallel to L by the definition of parallel lines (i.e. the alternate interior angles are congruent as per the 4th axiom). The statement is often written with the phrase, "there is one and only one parallel". In Euclid's Elements, two lines are said to be parallel if they never meet and other characterizations of parallel lines are not used. This axiom is used not only in Euclidean geometry but also in the broader study of affine geometry where the concept of parallelism is central. In the affine geometry setting, the stronger form of Playfair's axiom (where "at most one" is replaced by "one and only one") is needed since the axioms of neutral geometry are not present to provide a proof of existence. Playfair's version of the axiom has become so popular that it is often referred to as Euclid's parallel axiom, even though it was not Euclid's version of the axiom. History Proclus (410–485 A.D.) clearly makes the statement in his commentary on Euclid I.31 (Book I, Proposition 31). In 1785 William Ludlam expressed the parallel axiom as follows: Two straight lines, meeting at a poi
https://en.wikipedia.org/wiki/Bitter%20taste%20evolution
The evolution of bitter taste receptors has been one of the most dynamic evolutionary adaptations to arise in multiple species. This phenomenon has been widely studied in the field of evolutionary biology because of its role in the identification of toxins, which are often found on the leaves of inedible plants. Individuals with palates more sensitive to these bitter tastes would, in theory, have an advantage over less sensitive members of the population, because they would be much less likely to ingest toxic plants.

Bitter-taste genes have been found in a variety of species, and the same genes have been well characterized in several common laboratory animals, such as primates and mice, as well as in humans. The primary gene family responsible for encoding this ability in humans is the TAS2R family, which contains 25 functional loci as well as 11 pseudogenes. The development of this gene family has been well characterized, with evidence that the ability evolved before the human migration out of Africa. The family continues to evolve in the present day.

TAS2R

The bitter taste receptor family, T2R (TAS2R), is encoded on chromosomes 7 and 12. Genes on the same chromosome show remarkable similarity to one another, suggesting that the primary mutagenic force in the evolution of TAS2R has been gene duplication. These duplication events have occurred in several primate species, including chimpanzee, human, gorilla, orangutan, rhesus macaque, and baboon. The high variety among primate and rodent populations additionally suggests that, while selective constraint on these genes certainly exists, its effect is rather slight.

Members of the T2R family encode alpha subunits of G-protein-coupled receptors, which are involved in intracellular taste transduction, not only in the taste buds but also in the pancreas and gastrointestinal tract. The mechanism of transduction is shown by exposure of the endocrine and gastrointestinal cells containing the receptors to bitter compounds, most famously
https://en.wikipedia.org/wiki/Adenochlaena
Adenochlaena is a genus of plants in the family Euphorbiaceae, first described as a genus in 1858. It is native to certain islands in the Indian Ocean.

Species
Adenochlaena leucocephala Baill. - Madagascar, Comoros
Adenochlaena zeylanica (Baill.) Thwaites - Sri Lanka

Formerly included
Moved to other genera (Cladogynos, Epiprinus, Koilodepas):
A. calycina - Koilodepas calycinum
A. indica - Epiprinus mallotiformis
A. mallotiformis - Epiprinus mallotiformis
A. siamensis - Cladogynos orientalis
A. siletensis - Epiprinus siletianus
https://en.wikipedia.org/wiki/Elasticity%20coefficient
The rate of a chemical reaction is influenced by many different factors, such as temperature, pH, reactant and product concentrations, and other effectors. The degree to which these factors change the reaction rate is described by the elasticity coefficient. This coefficient is defined as follows:

\varepsilon^{v}_{s} = \frac{\partial v}{\partial s}\,\frac{s}{v} = \frac{\partial \ln v}{\partial \ln s}

where v denotes the reaction rate and s denotes the substrate concentration. Note that the notation uses lowercase roman letters, such as s, to indicate concentrations. The partial derivative in the definition indicates that the elasticity is measured with respect to changes in one factor s while keeping all other factors constant. The most common factors include substrates, products, and effectors. The scaling by s/v ensures that the coefficient is dimensionless and independent of the units used to measure the reaction rate and the magnitude of the factor.

The elasticity coefficient is an integral part of metabolic control analysis and was introduced in the early 1970s, and possibly earlier, by Henrik Kacser and Burns in Edinburgh and by Heinrich and Rapoport in Berlin. The elasticity concept has also been described by other authors, most notably Savageau in Michigan and Clarke in Edmonton.

In the late 1960s Michael Savageau developed an innovative approach called biochemical systems theory that uses power-law expansions to approximate the nonlinearities in biochemical kinetics. The theory is very similar to metabolic control analysis and has been used successfully and extensively to study the properties of different feedback and other regulatory structures in cellular networks. The power-law expansions used in the analysis invoke coefficients called kinetic orders, which are equivalent to elasticity coefficients.

In the early 1970s Bruce Clarke developed a sophisticated theory for analyzing the dynamic stability of chemical networks. As part of his analysis, Clarke also introduced the notion of kinetic orders and a power-law approximation that was somewhat simila
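Because the elasticity is a log-log derivative, it can be estimated numerically for any rate law by a finite difference on ln v versus ln s. The sketch below (a minimal illustration; the Michaelis-Menten rate law and the parameter values Vmax, Km are assumed for the example, not taken from the text) compares the numerical estimate with the known analytic elasticity Km/(Km + s) for that rate law:

```python
import math

def elasticity(rate, s, rel_step=1e-6):
    """Estimate the scaled sensitivity d(ln v)/d(ln s) of a rate law
    at substrate concentration s, via a central finite difference."""
    h = s * rel_step
    return (math.log(rate(s + h)) - math.log(rate(s - h))) / \
           (math.log(s + h) - math.log(s - h))

# Hypothetical Michaelis-Menten rate law: v = Vmax * s / (Km + s)
Vmax, Km = 10.0, 2.0
v = lambda s: Vmax * s / (Km + s)

s = 1.0
eps_numeric = elasticity(v, s)
eps_analytic = Km / (Km + s)  # analytic elasticity for Michaelis-Menten

print(eps_numeric, eps_analytic)
```

At low s the elasticity approaches 1 (first-order behavior) and at saturating s it approaches 0, which the same function reproduces numerically; this dimensionless, unit-independent behavior is exactly what the s/v scaling in the definition provides.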