https://en.wikipedia.org/wiki/Dictyate
The dictyate or dictyotene is a prolonged resting phase in oogenesis. It occurs in the stage of meiotic prophase I in ootidogenesis. It starts late in fetal life and is terminated shortly before ovulation by the LH surge. Thus, although the majority of oocytes are produced in female fetuses before birth, these pre-eggs remain arrested in the dictyate stage until puberty commences and the cells complete ootidogenesis. In both mouse and human, oocyte DNA of older individuals has substantially more double-strand breaks than that of younger individuals. The dictyate appears to be an adaptation for efficiently removing damage in germ-line DNA by homologous recombinational repair. Prophase-arrested oocytes have a high capability for efficient repair of DNA damage. DNA repair capability appears to be a key quality-control mechanism in the female germ line and a critical determinant of fertility.

Translation halt
Many mRNAs are transcribed but not translated during the dictyate. Shortly before ovulation, the oocyte activates these mRNA strands.

Biochemical mechanism
The halt in translation of mRNA during the dictyate is partly explained by molecules binding to sites on the mRNA strand, preventing translation initiation factors from binding there. Two such molecules that impede initiation factors are CPEB and maskin, which bind to the CPE (cytoplasmic polyadenylation element). While these two molecules remain together, maskin binds the initiation factor eIF-4E, so eIF-4E can no longer interact with the other initiation factors and no translation occurs. Conversely, dissolution of the CPEB/maskin complex allows eIF-4E to bind the initiation factor eIF-4G, so translation starts, contributing to the end of the dictyate and further maturation of the oocyte.

See also
Oogenesis
Immature ovum
Embryo
Zygote
https://en.wikipedia.org/wiki/Electronics%20%28magazine%29
Electronics was an American trade journal that covered the radio industry and its successor industries from 1930 to 1995. Its first issue is dated April 1930. The periodical was published under the title Electronics until 1984, when it was changed temporarily to ElectronicsWeek, but reverted to the original title Electronics in 1985. Separate ISSNs were assigned to the 1930–1984 issues, the 1984–1985 issues published as ElectronicsWeek, and the 1985–1995 issues. It was published by McGraw-Hill until 1988, when it was sold to the Dutch company VNU. VNU sold its American electronics magazines to Penton Publishing the next year. Generally a bimonthly magazine, its frequency and page count varied with the state of the industry, until its end in 1995. More than its principal rival Electronic News, it balanced its appeal between managerial and technical interests (at the time of its 1992 makeover, it described itself as a magazine for managers). The magazine is best known for publishing the April 19, 1965 article by Intel co-founder Gordon Moore, in which he outlined what came to be known as Moore's Law.

Intel's hunt for Moore's original article
On April 11, 2005, Intel posted a reward for an original, pristine copy of the issue of Electronics in which Moore's article was first published. The hunt was started in part because Moore had lost his personal copy after loaning it out. Intel asked a favor of its Silicon Valley neighbor, the auction website eBay, having a notice posted on the website. Intel's spokesman explained, "We're kind of hopeful that it will start a bit of a scavenger hunt for the engineering community of Silicon Valley, and hopefully somebody has it tucked away in a box in the corner of their garage. We think it's an important piece of history, and we'd love to have an original copy." It soon became apparent to librarians that their copies of the article were in danger of being stolen, so many libraries (including Duke Universi
https://en.wikipedia.org/wiki/Square%E2%80%93cube%20law
The square–cube law (or cube–square law) is a mathematical principle, applied in a variety of scientific fields, which describes the relationship between the volume and the surface area as a shape's size increases or decreases. It was first described in 1638 by Galileo Galilei in his Two New Sciences as the "...ratio of two volumes is greater than the ratio of their surfaces". This principle states that, as a shape grows in size, its volume grows faster than its surface area. When applied to the real world, this principle has many implications which are important in fields ranging from mechanical engineering to biomechanics. It helps explain phenomena including why large mammals like elephants have a harder time cooling themselves than small ones like mice, and why building taller and taller skyscrapers is increasingly difficult.

Description
The square–cube law can be stated as follows: when an object undergoes a proportional increase in size, its new surface area is proportional to the square of the multiplier and its new volume is proportional to the cube of the multiplier. Represented mathematically:

$A_2 = A_1 \left( \frac{\ell_2}{\ell_1} \right)^2$

where $A_1$ is the original surface area and $A_2$ is the new surface area, and

$V_2 = V_1 \left( \frac{\ell_2}{\ell_1} \right)^3$

where $V_1$ is the original volume, $V_2$ is the new volume, $\ell_1$ is the original length and $\ell_2$ is the new length.

For example, a cube with a side length of 1 meter has a surface area of 6 m² and a volume of 1 m³. If the sides of the cube were multiplied by 2, its surface area would be multiplied by the square of 2 and become 24 m², while its volume would be multiplied by the cube of 2 and become 8 m³. The original cube (1 m sides) has a surface area to volume ratio of 6:1; the larger cube (2 m sides) has a surface area to volume ratio of 24/8 = 3:1. As the dimensions increase, the volume continues to grow faster than the surface area; hence the square–cube law. This principle applies to all solids.

Applications

Engineering
When a physical object maintains the same density and is scaled up, its volume and mass are increased by the cube of the multiplier while its surface area increases only by the square of the same multiplier. This would mean that when the larger version of the object is acceler
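The cube example above can be checked with a few lines of code. The following sketch is an illustration added here, not part of the article; it tabulates how the surface-area-to-volume ratio of a cube falls as the side length grows:

# Square-cube law for a cube: scaling the side by k multiplies
# surface area by k**2 and volume by k**3, so area/volume shrinks as 1/k.

def cube_stats(side_m):
    surface_area = 6 * side_m ** 2   # six faces of side**2 each
    volume = side_m ** 3
    return surface_area, volume

for side in (1, 2, 4):
    area, volume = cube_stats(side)
    print(f"side={side} m: area={area} m^2, volume={volume} m^3, "
          f"area/volume={area / volume:.2f}")
# side=1 m: area=6 m^2,  volume=1 m^3,  area/volume=6.00
# side=2 m: area=24 m^2, volume=8 m^3,  area/volume=3.00 (area x4, volume x8)
# side=4 m: area=96 m^2, volume=64 m^3, area/volume=1.50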
https://en.wikipedia.org/wiki/Ethylparaben
Ethylparaben (ethyl para-hydroxybenzoate) is the ethyl ester of p-hydroxybenzoic acid. Its formula is HO-C6H4-CO-O-CH2CH3. It is a member of the class of compounds known as parabens. It is used as an antifungal preservative. As a food additive, it has E number E214. Sodium ethyl para-hydroxybenzoate, the sodium salt of ethylparaben, has the same uses and is given the E number E215.
https://en.wikipedia.org/wiki/Mindat.org
Mindat.org is a non-commercial interactive online database covering minerals across the world. Originally created by Jolyon Ralph as a private project in 1993, it was launched as a community-editable website in October 2000. It is operated by the Hudson Institute of Mineralogy.

History
Mindat was started in 1993 as a personal database project by Jolyon Ralph. He then developed further versions as a Microsoft Windows application before launching a community-editable database website on 10 October 2000. After further development following the move to the web, Mindat.org became an outreach program of the Hudson Institute of Mineralogy, a 501(c)(3) not-for-profit educational foundation incorporated in the state of New York. To address the increasing open-data needs of individual researchers and organizations, Mindat.org has started to build and maintain an open data API for data query and access, an effort that has received support from the National Science Foundation.

Description
Mindat claims to be the largest mineral database and mineralogical reference website on the Internet. It is used by professional mineralogists, geologists, and amateur mineral collectors alike, and is referenced in many publications. The database covers a variety of topics: scientific articles, field trip reports, mining history, advice for collectors, book reviews, mineral entries, localities, and photographs. Much of the information is from published literature, but registered editors may add and revise information and references. Editors are vetted for their expertise, in order to ensure accuracy. References have to be provided in the proper format, and editors own the copyright of the data that they have contributed. The data is organized into mineral and locality pages, with links that allow for easy navigation among the pages. The pages about minerals include individual minerals and rocks. Naming conventions adhere to the various standards and definitions as published by the In
https://en.wikipedia.org/wiki/Brachiolaria
A brachiolaria is the second stage of larval development in many starfishes. It follows the bipinnaria. Brachiolaria have bilateral symmetry, unlike the adult starfish, which have a pentaradial symmetry. Starfish of the order Paxillosida (Astropecten and Asterina) have no brachiolaria stage, with the bipinnaria developing directly into an adult. The brachiolaria develops from the bipinnaria larva when the latter grows three short arms at the underside of its anterior end. These arms each bear sticky cells at the tip, and they surround an adhesive sucker. The larva soon sinks to the bottom, attaching itself to the substrate, first with the tips of the arms, and then with the sucker. Once attached, it begins to metamorphose into the adult form. The adult starfish develops only from the hind part of the larva, away from the sucker. It is from this part that the arms of the adult grow, with the larval arms eventually degenerating and disappearing. The digestive system of the larva also degenerates and is almost entirely rebuilt. A new mouth forms on the left side of the body, which eventually becomes the lower, or oral, surface of the adult. Similarly, a new anus forms on the right side, which becomes the upper, or aboral, surface. The coelom, or body cavity, is divided into three chambers in the larva, two of which form the water vascular system, while the other remains as the adult body cavity. Once the tube feet develop from the water vascular system, the larva frees itself from the bottom. At around the same time, the skeleton begins to develop, initially in a ring around the anus; at this point the larva has developed into an adult, although it will continue to grow for some years before reaching sexual maturity.
https://en.wikipedia.org/wiki/Bipinnaria
A bipinnaria is the first stage in the larval development of most starfish, and is usually followed by a brachiolaria stage. Movement and feeding are accomplished by the bands of cilia. Starfish that brood their young generally lack a bipinnaria stage, with the eggs developing directly into miniature adults. The bipinnaria is free-living, swimming as part of the zooplankton. When it initially forms, the entire body is covered by cilia, but as it grows, these become confined to a narrow band forming a number of loops over the body surface. A pair of short, stubby arms soon develops on the body, with the ciliated bands extending into them. In addition to propelling the larva through the water, the cilia also catch suspended food particles and deliver them to the mouth (more correctly called a stomodeum). Eventually, three additional arms develop at the front end of the larva; at this point it becomes a brachiolaria. In some species, including the common starfish Asterias, the bipinnaria develops directly into an adult. It is very similar in appearance to the tornaria larvae of some Hemichordata, reflecting the descent of the Ambulacraria from a common ancestor.
https://en.wikipedia.org/wiki/R%C3%A9nyi%20entropy
In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events. In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions. The Rényi entropy is important in ecology and statistics as an index of diversity. The Rényi entropy is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of $\alpha$ can be calculated explicitly because it is an automorphic function with respect to a particular subgroup of the modular group. In theoretical computer science, the min-entropy is used in the context of randomness extractors.

Definition
The Rényi entropy of order $\alpha$, where $0 < \alpha < \infty$ and $\alpha \neq 1$, is defined as

$H_\alpha(X) = \frac{1}{1 - \alpha} \log \left( \sum_{i=1}^{n} p_i^{\alpha} \right).$

It is further defined at $\alpha = 0, 1, \infty$ as the corresponding limits. Here, $X$ is a discrete random variable with possible outcomes in the set $\mathcal{A} = \{x_1, x_2, \ldots, x_n\}$ and corresponding probabilities $p_i = \Pr(X = x_i)$ for $i = 1, \ldots, n$. The resulting unit of information is determined by the base of the logarithm, e.g. shannon for base 2, or nat for base e. If the probabilities are $p_i = 1/n$ for all $i$, then all the Rényi entropies of the distribution are equal: $H_\alpha(X) = \log n$. In general, for all discrete random variables $X$, $H_\alpha(X)$ is a non-increasing function in $\alpha$. Applications often exploit the following relation between the Rényi entropy and the p-norm of the vector of probabilities:

$H_\alpha(X) = \frac{\alpha}{1 - \alpha} \log \left( \|P\|_\alpha \right).$

Here, the discrete probability distribution $P = (p_1, \ldots, p_n)$ is interpreted as a vector in $\mathbb{R}^n$ with $p_i \geq 0$ and $\sum_{i=1}^{n} p_i = 1$. The Rényi entropy for any $\alpha \geq 0$ is Schur concave.

Special cases
As $\alpha$ approaches zero, the Rényi entropy increasingly weighs all events with nonzero probability more equally, regardless of their probabilities. In the limit for $\alpha \to 0$, the Rényi entropy is just the logarithm of the size of the support of $X$. The lim
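As a quick illustration of the definition and its special cases, here is a small sketch in Python; it is an addition for illustration, and the distribution P is an arbitrary assumed example:

import math

# Renyi entropy H_alpha(P) = log2(sum_i p_i**alpha) / (1 - alpha), in shannons,
# with the special orders alpha = 0 (Hartley), 1 (Shannon), infinity (min-entropy)
# handled as the corresponding limits.
def renyi_entropy(probs, alpha):
    if math.isinf(alpha):                        # min-entropy
        return -math.log2(max(probs))
    if alpha == 0:                               # Hartley: log of support size
        return math.log2(sum(1 for p in probs if p > 0))
    if math.isclose(alpha, 1.0):                 # Shannon entropy (limit alpha -> 1)
        return -sum(p * math.log2(p) for p in probs if p > 0)
    return math.log2(sum(p ** alpha for p in probs if p > 0)) / (1 - alpha)

P = [0.5, 0.25, 0.125, 0.125]
for a in (0, 0.5, 1.0, 2.0, math.inf):
    print(f"H_{a}(P) = {renyi_entropy(P, a):.4f} Sh")
# The printed values decrease as alpha grows, matching the non-increasing property.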
https://en.wikipedia.org/wiki/Frank%20Harary
Frank Harary (March 11, 1921 – January 4, 2005) was an American mathematician who specialized in graph theory. He was widely recognized as one of the "fathers" of modern graph theory. Harary was a master of clear exposition and, together with his many doctoral students, he standardized the terminology of graphs. He broadened the reach of this field to include physics, psychology, sociology, and even anthropology. Gifted with a keen sense of humor, Harary challenged and entertained audiences at all levels of mathematical sophistication. A particular trick he employed was to turn theorems into games. For instance, one group of students would try to add red edges to a graph on six vertices in order to create a red triangle, while another group tried to add edges to create a blue triangle (each edge of the graph had to be either blue or red). Because of the theorem on friends and strangers, one team or the other would have to win.

Biography
Frank Harary was born in New York City, the oldest child of a family of Jewish immigrants from Syria and Russia. He earned his bachelor's and master's degrees from Brooklyn College in 1941 and 1945 respectively, and his Ph.D., with supervisor Alfred L. Foster, from the University of California, Berkeley in 1948. Prior to his teaching career he was a research assistant in the Institute of Social Research at the University of Michigan. Harary's first publication, "Atomic Boolean-like rings with finite radical", took considerable effort to place in the Duke Mathematical Journal in 1950: the article was first submitted to the American Mathematical Society in November 1948, then sent to the Duke Mathematical Journal, where it was revised three times before it was finally published two years after its initial submission. Harary began his teaching career at the University of Michigan in 1953, where he was first an assistant professor, then in 1959 associate professor, and in 1964 was appointed as a professor of mathematics, a pos
https://en.wikipedia.org/wiki/Small-signal%20model
Small-signal modeling is a common analysis technique in electronics engineering used to approximate the behavior of electronic circuits containing nonlinear devices with linear equations. It is applicable to electronic circuits in which the AC signals (i.e., the time-varying currents and voltages in the circuit) are small relative to the DC bias currents and voltages. A small-signal model is an AC equivalent circuit in which the nonlinear circuit elements are replaced by linear elements whose values are given by the first-order (linear) approximation of their characteristic curve near the bias point.

Overview
Many of the electrical components used in simple electric circuits, such as resistors, inductors, and capacitors, are linear. Circuits made with these components, called linear circuits, are governed by linear differential equations and can be solved easily with powerful mathematical frequency-domain methods such as the Laplace transform. In contrast, many of the components that make up electronic circuits, such as diodes, transistors, integrated circuits, and vacuum tubes, are nonlinear; that is, the current through them is not proportional to the voltage, and the output of two-port devices like transistors is not proportional to their input. The relationship between current and voltage in them is given by a curved line on a graph, their characteristic curve (I–V curve). In general these circuits don't have simple mathematical solutions. To calculate the current and voltage in them generally requires either graphical methods or simulation on computers using electronic circuit simulation programs like SPICE. However, in some electronic circuits such as radio receivers, telecommunications, sensors, instrumentation and signal processing circuits, the AC signals are "small" compared to the DC voltages and currents in the circuit. In these, perturbation theory can be used to derive an approximate AC equivalent circuit which is linear, allowing the AC beh
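As a concrete sketch of the idea (illustrative values, not from the article), the following Python snippet linearizes the Shockley diode equation around a DC bias point and compares the resulting small-signal (linear) model with the exact nonlinear curve for a small AC excursion:

import math

I_S = 1e-12   # diode saturation current in amperes (assumed value)
V_T = 0.025   # thermal voltage in volts at room temperature

def diode_current(v):
    # Shockley diode equation: a nonlinear I-V characteristic
    return I_S * (math.exp(v / V_T) - 1.0)

V_BIAS = 0.65                    # assumed DC operating (bias) point
I_BIAS = diode_current(V_BIAS)

# Small-signal conductance = slope of the characteristic curve at the bias point
dv = 1e-6
g_d = (diode_current(V_BIAS + dv) - diode_current(V_BIAS - dv)) / (2 * dv)

v_ac = 0.001                     # a 1 mV signal, "small" relative to the bias
i_linear = I_BIAS + g_d * v_ac   # prediction of the linear small-signal model
i_exact = diode_current(V_BIAS + v_ac)
print(f"g_d = {g_d:.3e} S (analytically about I_D / V_T = {I_BIAS / V_T:.3e} S)")
print(f"linear: {i_linear:.6e} A, exact: {i_exact:.6e} A")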
https://en.wikipedia.org/wiki/Solarsoft
Solarsoft is a collaborative software development system created at Lockheed Martin to support solar data analysis and spacecraft operation activities. It is widely recognized in the solar physics community as having revolutionized solar data analysis starting in the early 1990s. Solarsoft is in active development and use by research groups on all seven continents. Solarsoft is a store-and-forward system that makes use of rsync, csh and other UNIX tools to distribute the software to a wide variety of platforms. Solarsoft predates CVS and most other collaborative development systems; hence, it does not provide direct support for many features that today would be considered necessary, such as software versioning. The use of Solarsoft has grown to include calibration data and even complete catalog indices for some instruments, as well as the scientific software. Most of the software in the Solarsoft tree pertains to either solar data analysis or specific space missions or observatories such as Yohkoh or SOHO. The vast majority is written in IDL, the most commonly used analysis platform in the solar physics community, though some C, ana, and PDL modules are also available.

External links
Solarsoft @ LMSAL
Solarsoft @ NASA
https://en.wikipedia.org/wiki/Homaro%20Cantu
Homaro "Omar" Cantu Jr. (September 23, 1976 – April 14, 2015) was an American chef and inventor known for his use of molecular gastronomy. As a child, Cantu was fascinated with science and engineering. While working in a fast food restaurant, he discovered the similarities between science and cooking and decided to become a chef. In 1999, he was hired by his idol, Chicago chef Charlie Trotter. In 2003, Cantu became the first chef of Moto, which he later purchased. Through Moto, Cantu explored his unusual ideas about cooking including edible menus, carbonated fruit, and food cooked with a laser. Initially seen as a novelty only, Moto eventually earned critical praise and, in 2012, a Michelin star. Cantu's second restaurant, iNG, and his coffee house, Berrista, focused on the use of "miracle berries" to make sour food taste sweet. He was working on opening a brewery called Crooked Fork at the time of his suicide in 2015. In addition to being a chef, Cantu was a media personality, appearing regularly on TV shows, and an inventor. In 2010, he produced and co-hosted a show called Future Food. Through his media appearances, he advocated for an end to world hunger and thought his edible paper creation and the miracle berry could play a significant role in that goal. Cantu volunteered his time and money to a variety of charities and patented several food gadgets. Early life Cantu was born in Tacoma, Washington, on September 23, 1976. His father was a fabrication engineer and Cantu developed a passion for science and engineering at a young age. He disassembled the family lawn mower three times to learn how it worked, and his "Christmas gifts would wind up in a million pieces." A self-described problem child, Cantu grew up in Portland, Oregon. From the age of six to nine, he was homeless. He would later credit the homelessness for his inspiration to make food and become a social entrepreneur. At the age of twelve, Cantu was nearly jailed for starting a large fire near hi
https://en.wikipedia.org/wiki/Holy%20trinity%20%28cooking%29
The "holy trinity" in Cajun cuisine and Louisiana Creole cuisine is the base for several dishes in the regional cuisines of Louisiana and consists of onions, bell peppers and celery. The preparation of Cajun/Creole dishes such as crawfish étouffée, gumbo, and jambalaya all start from this base. Variants use garlic, parsley, or shallots in addition to the three trinity ingredients. The addition of garlic to the holy trinity is sometimes referred to as adding "the pope." The holy trinity is the Cajun and Louisiana Creole variant of mirepoix; traditional mirepoix is two parts onions, one part carrots, and one part celery, whereas the holy trinity is typically one or two parts onions, one part green bell pepper, and one part celery. It is also an evolution of the Spanish sofrito, which contains onion, garlic, bell peppers, and tomatoes. Origin of the name The name is an allusion to the Christian doctrine of the Trinity. The term is first attested in 1981 and was probably popularized by Paul Prudhomme. See also Mirepoix Sofrito Soffritto Epis
https://en.wikipedia.org/wiki/Branko%20Gr%C3%BCnbaum
Branko Grünbaum (2 October 1929 – 14 September 2018) was a Croatian-born mathematician of Jewish descent and a professor emeritus at the University of Washington in Seattle. He received his Ph.D. in 1957 from the Hebrew University of Jerusalem in Israel.

Life
Grünbaum was born in Osijek, then part of the Kingdom of Yugoslavia, on 2 October 1929. His father was Jewish and his mother was Catholic, so during World War II the family survived the Holocaust by living at his Catholic grandmother's home. After the war, as a high school student, he met Zdenka Bienenstock, a Jew who had lived through the war hidden in a convent while the rest of her family were killed. Grünbaum became a student at the University of Zagreb, but grew disenchanted with the communist ideology of the Socialist Federal Republic of Yugoslavia, applied for emigration to Israel, and traveled with his family and Zdenka to Haifa in 1949. In Israel, Grünbaum found a job in Tel Aviv, but in 1950 returned to the study of mathematics at the Hebrew University of Jerusalem. He earned a master's degree in 1954 and in the same year married Zdenka, who continued as a master's student in chemistry. He served a tour of duty as an operations researcher in the Israeli Air Force beginning in 1955, and he and Zdenka had the first of their two sons in 1956. He completed his Ph.D. in 1957; his dissertation concerned convex geometry and was supervised by Aryeh Dvoretzky. After finishing his military service in 1958, Grünbaum and his family came to the US so that Grünbaum could become a postdoctoral researcher at the Institute for Advanced Study. He then became a visiting researcher at the University of Washington in 1960. He agreed to return to Israel as a lecturer at the Hebrew University, but his plans were disrupted by the Israeli authorities determining that he was not a Jew (because his mother was not Jewish) and annulling his marriage; he and Zdenka remarried in Seattle before their return. Grünbaum remained a
https://en.wikipedia.org/wiki/Washington%20meridians
The Washington meridians are four meridians that were used as prime meridians in the United States and that pass through Washington, D.C. The four that have been specified are:
through the Capitol
through the White House
through the old Naval Observatory
through the new Naval Observatory

Their longitudes may be reported in three ways:
relative to the local vertical used by astronomic observations;
relative to NAD 27 (North American Datum 1927), an ellipsoid of revolution that is at mean sea level beneath triangulation station Meades Ranch, Kansas (not Earth-centered);
relative to NAD 83, an Earth-centered ellipsoid of revolution with dimensions chosen to best fit the undulating (±100 m) geoid (world-wide mean sea level).

The NAD 83 longitude of the Capitol is about 1.1 arc seconds less than its NAD 27 longitude; astronomic longitude there is about 4 arc seconds less than NAD 83.

Capitol meridian
Pierre (Peter) Charles L'Enfant specified the first meridian in his 1791 "Plan of the city intended for the permanent seat of the government of the United States . . ." (see: L'Enfant Plan). (Shortly after L'Enfant prepared this plan, its subject received the name "City of Washington".) His plan stated near its right side the longitude of the Congress house, now called the Capitol. L'Enfant's plan contained the following explanatory note:

In order to execute the above plan, Mr. Ellicott drew a true meridian line by celestial observation, which passes through the area intended for the Congress-House; this line he crossed by another line due east and west and which passes through the same area. These lines were accurately measured, and made the basis on which the whole plan was executed. He ran all the lines by a transit instrument, and determined the acute angles by actual measurement, and left nothing to the uncertainty of the compass.

The longitude of the center of the Capitol's dome (completed in 1863 during the Civil War) is now given by the National Geodeti
https://en.wikipedia.org/wiki/Expressivity%20%28genetics%29
In genetics, expressivity is the degree to which a phenotype is expressed by individuals having a particular genotype. (Alternately, it may refer to the expression of a particular gene by individuals having a certain phenotype.) Expressivity is related to the intensity of a given phenotype; it differs from penetrance, which refers to the proportion of individuals with a particular genotype that actually express the phenotype.

Variable expressivity
Variable expressivity refers to the degree to which a genotype is phenotypically expressed. For example, multiple people with the same disease can have the same genotype, but one may express more severe symptoms while another carrier may appear normal. This variation in expression can be affected by modifier genes, epigenetic factors, or the environment. Modifier genes can alter the expression of other genes in either an additive or a multiplicative way, meaning the phenotype that is observed can be the result of two different alleles being summed or multiplied. However, a reduction in expression may also occur, in which the primary locus, where the phenotype is expressed, is affected. Epigenetic factors, such as cis-regulatory elements, can also cause variability in expression by inducing variation in transcript abundance.

Examples
Three common syndromes that involve phenotypic variability due to expressivity are Marfan syndrome, Van der Woude syndrome, and neurofibromatosis. The characteristics of Marfan syndrome vary widely among individuals. The syndrome affects connective tissue in the body and has a spectrum of symptoms ranging from mild bone and joint involvement to severe neonatal forms and cardiovascular disease. This diversity in symptoms is a result of variable expressivity of the FBN1 gene found on chromosome 15. The gene product is involved in the proper assembly of microfibrils. Van der Woude syndrome is a condition that affects the development of the face, specifically a cleft lip (see fi
https://en.wikipedia.org/wiki/Federigo%20Enriques
Abramo Giulio Umberto Federigo Enriques (5 January 1871 – 14 June 1946) was an Italian mathematician, now known principally as the first to give a classification of algebraic surfaces in birational geometry, and for other contributions to algebraic geometry.

Biography
Enriques was born in Livorno, and brought up in Pisa, in a Sephardi Jewish family of Portuguese descent. His younger brother was the zoologist Paolo Enriques, who was the father of Enzo Enriques Agnoletti and Anna Maria Enriques Agnoletti. He became a student of Guido Castelnuovo (who later became his brother-in-law after marrying his sister Elbina), and became an important member of the Italian school of algebraic geometry. He also worked on differential geometry. He collaborated with Castelnuovo, Corrado Segre and Francesco Severi. He had positions at the University of Bologna, and then the University of Rome La Sapienza. In 1931 he swore allegiance to fascism, and in 1933 he became a member of the PNF. Despite this, he lost his position in 1938, when the Fascist government enacted the "leggi razziali" (racial laws), which in particular banned Jews from holding professorships in universities.

The Enriques classification, of complex algebraic surfaces up to birational equivalence, was into five main classes, and was background to further work until Kunihiko Kodaira reconsidered the matter in the 1950s. The largest class, in some sense, was that of surfaces of general type: those for which the consideration of differential forms provides linear systems that are large enough to make all the geometry visible. The work of the Italian school had provided enough insight to recognise the other main birational classes. Rational surfaces and more generally ruled surfaces (these include quadrics and cubic surfaces in projective 3-space) have the simplest geometry. Quartic surfaces in 3-space are now classified (when non-singular) as cases of K3 surfaces; the classical approach was to look at the Kummer surfaces
https://en.wikipedia.org/wiki/Negamax
Negamax search is a variant form of minimax search that relies on the zero-sum property of a two-player game. This algorithm relies on the fact that $\max(a, b) = -\min(-a, -b)$ to simplify the implementation of the minimax algorithm. More precisely, the value of a position to player A in such a game is the negation of the value to player B. Thus, the player on move looks for a move that maximizes the negation of the value resulting from the move: this successor position must by definition have been valued by the opponent. The reasoning of the previous sentence works regardless of whether A or B is on move. This means that a single procedure can be used to value both positions. This is a coding simplification over minimax, which requires that A selects the move with the maximum-valued successor while B selects the move with the minimum-valued successor. It should not be confused with negascout, an algorithm to compute the minimax or negamax value quickly by clever use of alpha–beta pruning discovered in the 1980s. Note that alpha–beta pruning is itself a way to compute the minimax or negamax value of a position quickly by avoiding the search of certain uninteresting positions. Most adversarial search engines are coded using some form of negamax search.

Negamax base algorithm
Negamax operates on the same game trees as those used with the minimax search algorithm. Each node and root node in the tree are game states (such as game board configuration) of a two-player game. Transitions to child nodes represent moves available to a player who is about to play from a given node. The negamax search objective is to find the node score value for the player who is playing at the root node. The pseudocode below shows the negamax base algorithm, with a configurable limit for the maximum search depth:

function negamax(node, depth, color) is
    if depth = 0 or node is a terminal node then
        return color × the heuristic value of node
    value := −∞
    for each child of node do
        value := max(value, −negamax(child, depth − 1, −color))
    return value
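A direct, runnable Python translation of the pseudocode follows; the Node type, the example tree, and the heuristic values are illustrative additions, not from the article:

from dataclasses import dataclass, field

@dataclass
class Node:
    value: int = 0                       # heuristic value from player A's viewpoint
    children: list = field(default_factory=list)

def negamax(node, depth, color):
    # color is +1 when player A is to move and -1 when player B is
    if depth == 0 or not node.children:
        return color * node.value
    best = float("-inf")
    for child in node.children:
        best = max(best, -negamax(child, depth - 1, -color))
    return best

# A two-ply tree: A chooses a move, then B (the minimizer) replies.
root = Node(children=[
    Node(children=[Node(value=3), Node(value=12)]),  # B would pick 3 here
    Node(children=[Node(value=2), Node(value=4)]),   # B would pick 2 here
])
print(negamax(root, depth=2, color=1))  # prints 3: A's best guaranteed outcome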
https://en.wikipedia.org/wiki/Naimark%27s%20problem
Naimark's problem is a question in functional analysis asked by Mark Naimark in 1951. It asks whether every C*-algebra that has only one irreducible *-representation up to unitary equivalence is isomorphic to the algebra of compact operators on some (not necessarily separable) Hilbert space. The problem has been solved in the affirmative for special cases (specifically for separable and Type-I C*-algebras). Akemann and Weaver (2004) used the diamond principle $\diamondsuit$ to construct a C*-algebra with $\aleph_1$ generators that serves as a counterexample to Naimark's problem. More precisely, they showed that the existence of a counterexample generated by $\aleph_1$ elements is independent of the axioms of Zermelo–Fraenkel set theory and the axiom of choice (ZFC). Whether Naimark's problem itself is independent of ZFC remains unknown.

See also
List of statements undecidable in ZFC
Gelfand–Naimark theorem
https://en.wikipedia.org/wiki/Barrelled%20space
In functional analysis and related areas of mathematics, a barrelled space (also written barreled space) is a topological vector space (TVS) for which every barrelled set in the space is a neighbourhood of the zero vector. A barrelled set or a barrel in a topological vector space is a set that is convex, balanced, absorbing, and closed. Barrelled spaces are studied because a form of the Banach–Steinhaus theorem still holds for them. Barrelled spaces were introduced by Bourbaki (1950).

Barrels
A convex and balanced subset of a real or complex vector space is called a disk, and it is said to be disked, absolutely convex, or convex balanced. A barrel or a barrelled set in a topological vector space (TVS) is a subset that is a closed absorbing disk; that is, a barrel is a convex, balanced, closed, and absorbing subset. Every barrel must contain the origin. If $\dim X \geq 2$ and if $S$ is any subset of $X$, then $S$ is a convex, balanced, and absorbing set of $X$ if and only if this is all true of $S \cap Y$ in $Y$ for every 2-dimensional vector subspace $Y$; thus if $\dim X \geq 2$ then the requirement that a barrel be a closed subset of $X$ is the only defining property that does not depend solely on 2 (or lower)-dimensional vector subspaces of $X$. If $X$ is any TVS then every closed convex and balanced neighborhood of the origin is necessarily a barrel in $X$ (because every neighborhood of the origin is necessarily an absorbing subset). In fact, every locally convex topological vector space has a neighborhood basis at its origin consisting entirely of barrels. However, in general, there exist barrels that are not neighborhoods of the origin; "barrelled spaces" are exactly those TVSs in which every barrel is necessarily a neighborhood of the origin. Every finite-dimensional topological vector space is a barrelled space, so examples of barrels that are not neighborhoods of the origin can only be found in infinite-dimensional spaces.

Examples of barrels and non-barrels
The closure of any convex, balanced, and absorbing subset is a barrel. This is because the closure of any convex (respectively, any balanced, any
https://en.wikipedia.org/wiki/Spencer%20Wells
Spencer Wells (born April 6, 1969) is an American geneticist, anthropologist, author and entrepreneur. He co-hosts The Insight podcast with Razib Khan. Wells led The Genographic Project from 2005 to 2015, as an Explorer-in-Residence at the National Geographic Society, and is the founder and executive director of the personal genomics nonprofit The Insitome Institute.

Biography

Youth and education
Wells was born in Marietta, Georgia and grew up in Lubbock, Texas. He attended both All Saints School and Lubbock High School, and received a National Merit Scholarship. He obtained a Bachelor of Science in biology from the University of Texas at Austin in 1988 and a Ph.D. in biology from Harvard University in 1994. He was a postdoctoral fellow at Stanford University between 1994 and 1998, and a research fellow at the University of Oxford from 1999 to 2000.

Career
Wells did his Ph.D. work under Richard Lewontin, and later did postdoctoral research with Luigi Luca Cavalli-Sforza and Sir Walter Bodmer. His work, which has helped to establish the critical role played by Central Asia in the peopling of the world, has been published in journals such as Science, the American Journal of Human Genetics, and the Proceedings of the National Academy of Sciences. Wells is renowned for his logistically complex sample-collecting expeditions in remote parts of the world. EurAsia98, which in 1998 took him and his team from London to the Altai Mountains on the Mongolian border, via an overland route through the Caucasus, Iran and the -stans of Central Asia, was sponsored by Land Rover. In 2005 he led a team of Genographic scientists on the first modern expedition to the Tibesti Mountains in northern Chad, and in 2006 he led a team to the Wakhan Corridor on the Tajik-Afghan border. His work has taken him to more than 100 countries. He wrote the book The Journey of Man: A Genetic Odyssey (2002), which explains how genetic data has been used to trace human migrations over the past 50,000 ye
https://en.wikipedia.org/wiki/Barrelled%20set
In functional analysis, a subset of a topological vector space (TVS) is called a barrel or a barrelled set if it is closed, convex, balanced, and absorbing. Barrelled sets play an important role in the definitions of several classes of topological vector spaces, such as barrelled spaces.

Definitions
Let $X$ be a topological vector space (TVS). A subset $B$ of $X$ is called a barrel if it is closed, convex, balanced, and absorbing in $X$. A subset $B$ of $X$ is called bornivorous and a bornivore if it absorbs every bounded subset of $X$. Every bornivorous subset of $X$ is necessarily an absorbing subset of $X$.

Let $B_0$ be a subset of a topological vector space $X$. If $B_0$ is a balanced absorbing subset of $X$ and if there exists a sequence $(B_i)_{i=1}^{\infty}$ of balanced absorbing subsets of $X$ such that $B_{i+1} + B_{i+1} \subseteq B_i$ for all $i$, then $B_0$ is called a suprabarrel in $X$, where moreover, $B_0$ is said to be a(n):
bornivorous suprabarrel if in addition every $B_i$ is a bornivorous subset of $X$ for every $i$;
ultrabarrel if in addition every $B_i$ is a closed subset of $X$ for every $i$;
bornivorous ultrabarrel if in addition every $B_i$ is a closed and bornivorous subset of $X$ for every $i$.
In this case, $(B_i)_{i=1}^{\infty}$ is called a defining sequence for $B_0$.

Properties
Note that every bornivorous ultrabarrel is an ultrabarrel and that every bornivorous suprabarrel is a suprabarrel.

Examples
In a seminormed vector space the closed unit ball is a barrel: it is closed, convex, balanced (multiplying by a scalar of absolute value at most one cannot increase the seminorm), and absorbing (every vector $x$ lies in $rB$ whenever $r \geq \|x\|$). Every locally convex topological vector space has a neighbourhood basis consisting of barrelled sets, although the space itself need not be a barrelled space.
https://en.wikipedia.org/wiki/Online%20school
An online school (virtual school, e-school, or cyber-school) teaches students entirely or primarily online or through the Internet. It has been defined as "education that uses one or more technologies to deliver instruction to students who are separated from the instructor and to support regular and substantive interaction between the students." Online education exists all around the world and is used for all levels of education (K-12, high school/secondary school, college, or graduate school). This type of learning enables individuals to earn transferable credits, take recognized examinations, and advance to the next level of education over the Internet. Virtual education is most commonly used at the high school and college levels. Students aged 30 or older tend to study online programs at higher rates: this group represents 41% of the online-education population, while 35.5% of students aged 24–29 and 24.5% of students aged 15–23 participate in virtual education. Virtual education is becoming increasingly used worldwide. There are currently more than 4,700 colleges and universities that provide online courses to their students. In 2015, more than 6 million students were taking at least one course online, a number that grew by 3.9% from the previous year. 29.7% of all higher-education students are taking at least one distance course. The total number of students studying exclusively on a campus dropped by 931,317 between 2012 and 2015. Experts say that because the number of students studying at the college level is growing, there will also be an increase in the number of students enrolled in distance learning. Instructional models vary, ranging from distance-learning types which provide study materials for independent self-paced study, to live, interactive classes where students communicate with a teacher in a class group lesson. Class sizes range widely, from a small group of 6 pupils or students to hundreds in a virtual school. The courses that are i
https://en.wikipedia.org/wiki/Biometrika
Biometrika is a peer-reviewed scientific journal published by Oxford University Press for the Biometrika Trust. The editor-in-chief is Paul Fearnhead (Lancaster University). The principal focus of this journal is theoretical statistics. It was established in 1901 and originally appeared quarterly. It changed to three issues per year in 1977 but returned to quarterly publication in 1992.

History
Biometrika was established in 1901 by Francis Galton, Karl Pearson, and Raphael Weldon to promote the study of biometrics. The history of Biometrika is covered by Cox (2001). The name of the journal was chosen by Pearson, but Francis Edgeworth insisted that it be spelt with a "k" and not a "c". Since the 1930s, it has been a journal for statistical theory and methodology. Galton's role in the journal was essentially that of a patron, and the journal was run by Pearson and Weldon, and after Weldon's death in 1906 by Pearson alone until he died in 1936. In the early days, the American biologists Charles Davenport and Raymond Pearl were nominally involved, but they dropped out. On Pearson's death his son Egon Pearson became editor and remained in this position until 1966. David Cox was editor for the next 25 years. So, in its first 65 years Biometrika had effectively a total of just three editors, and in its first 90 years only four. Other people who were deeply involved in the journal included William Palin Elderton, an associate of Pearson's who published several articles in the early days and in 1935 became chairman of the Biometrika Trust. In the very first issue, the editors presented a clear statement of purpose:

It is intended that Biometrika shall serve as a means not only of collecting or publishing under one title biological data of a kind not systematically collected or published elsewhere in any other periodical, but also of spreading a knowledge of such statistical theory as may be requisite for their scientific treatment.

Its contents were to include: memoirs on va
https://en.wikipedia.org/wiki/Balanced%20set
In linear algebra and related areas of mathematics a balanced set, circled set or disk in a vector space (over a field $\mathbb{K}$ with an absolute value function $|\cdot|$) is a set $S$ such that $aS \subseteq S$ for all scalars $a$ satisfying $|a| \leq 1$. The balanced hull or balanced envelope of a set $S$ is the smallest balanced set containing $S$. The balanced core of a set $S$ is the largest balanced set contained in $S$. Balanced sets are ubiquitous in functional analysis because every neighborhood of the origin in every topological vector space (TVS) contains a balanced neighborhood of the origin and every convex neighborhood of the origin contains a balanced convex neighborhood of the origin (even if the TVS is not locally convex). This neighborhood can also be chosen to be an open set or, alternatively, a closed set.

Definition
Let $X$ be a vector space over the field $\mathbb{K}$ of real or complex numbers.

Notation
If $S$ is a set, $a$ is a scalar, and $B \subseteq \mathbb{K}$, then let $aS = \{as : s \in S\}$ and $BS = \{bs : b \in B, s \in S\}$, and for any $0 \leq r \leq \infty$, let $B_r = \{a \in \mathbb{K} : |a| < r\}$ and $B_{\leq r} = \{a \in \mathbb{K} : |a| \leq r\}$ denote, respectively, the open ball and the closed ball of radius $r$ in the scalar field $\mathbb{K}$ centered at $0$, where $B_0 = \varnothing$, $B_{\leq 0} = \{0\}$, and $B_\infty = B_{\leq \infty} = \mathbb{K}$. Every balanced subset of the field $\mathbb{K}$ is of the form $B_{\leq r}$ or $B_r$ for some $0 \leq r \leq \infty$.

Balanced set
A subset $S$ of $X$ is called a balanced set or balanced if it satisfies any of the following equivalent conditions:
Definition: $aS \subseteq S$ for all scalars $a$ satisfying $|a| \leq 1$.
$B_{\leq 1} S \subseteq S$, where $B_{\leq 1} = \{a \in \mathbb{K} : |a| \leq 1\}$.
$B_{\leq 1} S = S$.
For every $x \in X$, $\mathbb{K}x$ is a $0$ (if $x = 0$) or $1$ (if $x \neq 0$) dimensional vector subspace of $X$; if $x \neq 0$ then the condition that $S \cap \mathbb{K}x$ be balanced reduces to the previous condition for a set to be balanced. Thus, $S$ is balanced if and only if for every $x \in X$, $S \cap \mathbb{K}x$ is a balanced set (according to any of the previous defining conditions).
For every 1-dimensional vector subspace $Y$ of $X$, $S \cap Y$ is a balanced set (according to any defining condition other than this one).
For every $s \in S$, there exists some $0 \leq r \leq \infty$ such that $S \cap \mathbb{K}s = B_r s$ or $S \cap \mathbb{K}s = B_{\leq r} s$.
If $S$ is a convex set then this list may be extended to include: $aS \subseteq S$ for all scalars $a$ satisfying $|a| = 1$.
If $\mathbb{K} = \mathbb{R}$ then this list may be extended to include: $S$ is symmetric (meaning $-S = S$) and $[0, 1]S \subseteq S$.

Balanced hull
The balanced hull of a subset $S$ of $X$, denoted by $\operatorname{bal} S$, is $B_{\leq 1} S = \bigcup_{|a| \leq 1} aS$, the smallest balanced set containing $S$.
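A concrete illustration of how the scalar field matters (an added example, following directly from the formula $\operatorname{bal} S = \bigcup_{|a| \leq 1} aS$): viewing $\mathbb{C}$ as a one-dimensional complex vector space, $\operatorname{bal}\{1\} = B_{\leq 1}$ is the closed unit disk, since the admissible scalars $a$ sweep out a full disk; viewing $\mathbb{R}^2$ as a real vector space, $\operatorname{bal}\{(1, 1)\}$ is only the line segment $\{(t, t) : -1 \leq t \leq 1\}$, since the admissible scalars sweep out a segment. The scalar field thus determines the shape of balanced sets.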
https://en.wikipedia.org/wiki/Absolutely%20convex%20set
In mathematics, a subset $C$ of a real or complex vector space is said to be absolutely convex or disked if it is convex and balanced (some people use the term "circled" instead of "balanced"), in which case it is called a disk. The disked hull or the absolute convex hull of a set is the intersection of all disks containing that set.

Definition
A subset $S$ of a real or complex vector space $X$ is called a disk and is said to be disked, absolutely convex, and convex balanced if any of the following equivalent conditions is satisfied:
$S$ is a convex and balanced set.
For any scalars $a$ and $b$, if $|a| + |b| \leq 1$ then $aS + bS \subseteq S$.
For all scalars $a$, $b$, and $c$, if $|a| + |b| \leq |c|$, then $aS + bS \subseteq cS$.
For any scalars $a_1, \ldots, a_n$ and $c$, if $|a_1| + \cdots + |a_n| \leq |c|$ then $a_1 S + \cdots + a_n S \subseteq cS$.
For any scalars $a_1, \ldots, a_n$, if $|a_1| + \cdots + |a_n| \leq 1$ then $a_1 S + \cdots + a_n S \subseteq S$.

The smallest convex (respectively, balanced) subset of $X$ containing a given set is called the convex hull (respectively, the balanced hull) of that set and is denoted by $\operatorname{co} S$ (respectively, $\operatorname{bal} S$). Similarly, the disked hull, the absolute convex hull, and the convex balanced hull of a set $S$ is defined to be the smallest disk (with respect to subset inclusion) containing $S$. The disked hull of $S$ will be denoted by $\operatorname{disk} S$ or $\operatorname{cobal} S$, and it is equal to each of the following sets:
$\operatorname{co}(\operatorname{bal} S)$, which is the convex hull of the balanced hull of $S$; thus, $\operatorname{cobal} S = \operatorname{co}(\operatorname{bal} S)$. In general, $\operatorname{cobal} S \neq \operatorname{bal}(\operatorname{co} S)$ is possible, even in finite-dimensional vector spaces.
The intersection of all disks containing $S$.

Sufficient conditions
The intersection of arbitrarily many absolutely convex sets is again absolutely convex; however, unions of absolutely convex sets need not be absolutely convex anymore. If $D$ is a disk in $X$, then $D$ is absorbing in $X$ if and only if $\operatorname{span} D = X$.

Properties
If $B$ is an absorbing disk in a vector space $X$ then there exists an absorbing disk $E$ in $X$ such that $E + E \subseteq B$. If $D$ is a disk and $r$ and $s$ are scalars then $sD = |s|D$ and $(rD) \cap (sD) = (\min\{|r|, |s|\})D$. The absolutely convex hull of a bounded set in a locally convex topological vector space is again bounded. If $D$ is a bounded disk in a TVS $X$ and if $x_\bullet = (x_i)_{i=1}^{\infty}$ is a sequence in $D$, then the partial sums $s_\bullet = (s_n)_{n=1}^{\infty}$ are Cauchy, where $s_n = \sum_{i=1}^{n} 2^{-i} x_i$. In particular, if in addition $D$ is a sequentially complete subset of $X$, then this series converges in $X$ to some point of $D$. The convex balanced hull of $S$ contains both the convex hull of $S$ and the balanced hull of $S$.
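A worked example (added here, not from the article) in $X = \mathbb{R}^2$ with $S = \{(1, 0), (0, 1)\}$:

$\operatorname{cobal} S = \operatorname{co}(\operatorname{bal} S) = \{(x, y) : |x| + |y| \leq 1\},$

the square with vertices $(\pm 1, 0)$ and $(0, \pm 1)$, i.e. the closed unit ball of the $\ell^1$ norm. By contrast, $\operatorname{bal}(\operatorname{co} S)$ is the "bowtie" $\{t(\lambda, 1 - \lambda) : 0 \leq \lambda \leq 1, \, |t| \leq 1\}$, which is not convex; this illustrates why the order of the two hull operations matters.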
https://en.wikipedia.org/wiki/Music%20Construction%20Set
Will Harvey's Music Construction Set (MCS) is a music composition notation program designed by Will Harvey for the Apple II and published by Electronic Arts in 1983. Harvey wrote the original Apple II version in assembly language when he was 15 and in high school. MCS was conceived as a tool to add music to his previously published game, an abstract shooter called Lancaster for the Apple II. Music Construction Set was ported to the Atari 8-bit family, Commodore 64, IBM PC (as a booter), and the Atari ST. In 1986, Will Harvey released a port for the 16-bit Apple IIGS, utilizing its advanced sound. Also that year, a redesigned version for the Amiga and Macintosh was released as Deluxe Music Construction Set.

Overview
With MCS, a user can create musical compositions via a graphical user interface, a novel concept at the time of its release. Users can drag and drop notes onto the staff, play back their creations through the computer's speakers, and print them out. The program comes with a few popular songs as samples. Most versions of this program require the user to use a joystick to create songs, note by note. The original Apple II version supports the Mockingboard expansion card for higher-fidelity sound output. In addition, use of the Mockingboard allows the musical staff to scroll along with the music as notes are played; without it, the Apple II cannot update the display while playback is in progress.

Ports
Electronic Arts ported MCS from the original Apple II version to the Atari 8-bit family, IBM PC, and the Commodore 64. The Atari 8-bit and Commodore 64 versions use the multi-channel audio hardware of those systems. The IBM PC version allows audio output via the IBM PC Model 5150's cassette port, so 4-voice music can be sent to a stereo system. It also takes advantage of the 3-voice sound chip built into the IBM PCjr and Tandy 1000. The Apple IIGS version was done by the original programmer, Will Harvey, in 1986. This port ta
https://en.wikipedia.org/wiki/Herbert%20Feigl
Herbert Feigl (December 14, 1902 – June 1, 1988) was an Austrian-American philosopher and an early member of the Vienna Circle. He coined the term "nomological danglers".

Biography
The son of a trained weaver who became a textile designer, Feigl was born in Reichenberg (Liberec), Bohemia, into a Jewish (though not religious) family. He matriculated at the University of Vienna in 1922 and studied physics and philosophy under Moritz Schlick, Hans Hahn, Hans Thirring, and Karl Bühler. He became one of the members of the Vienna Circle in 1924 and would be one of the few Circle members (along with Schlick and Friedrich Waismann) to have extensive conversations with Ludwig Wittgenstein and Karl Popper. Feigl received his doctorate at Vienna in 1927 for his dissertation Zufall und Gesetz: Versuch einer naturerkenntnistheoretischen Klärung des Wahrscheinlichkeits- und Induktionsproblems (Chance and Law: An Epistemological Analysis of the Roles of Probability and Induction in the Natural Sciences). He published his first book, Theorie und Erfahrung in der Physik (Theory and Experience in Physics), in 1929. In 1930, on an International Rockefeller Foundation scholarship at Harvard University, Feigl met the physicist Percy Williams Bridgman, the philosopher Willard Van Orman Quine, and the psychologist Stanley Smith Stevens, all of whom he saw as kindred spirits. In 1931, with Albert Blumberg, he published the paper "Logical Positivism: A New European Movement", which argued for logical positivism to be renamed "logical empiricism" based upon certain realist differences between contemporary philosophy of science and the older positivist movement. In 1930, Feigl married Maria Kaspar and emigrated with her to the United States, settling in Iowa to take up a position in the philosophy department at the University of Iowa. Their son, Eric Otto, was born in 1933. In 1940, Herbert Feigl accepted a position as professor of philosophy at the University of Minnesota, where he re
https://en.wikipedia.org/wiki/Disproportionation
In chemistry, disproportionation, sometimes called dismutation, is a redox reaction in which one compound of intermediate oxidation state converts to two compounds, one of higher and one of lower oxidation state. The reverse of disproportionation, such as when a compound in an intermediate oxidation state is formed from precursors of lower and higher oxidation states, is called comproportionation, also known as synproportionation. More generally, the term can be applied to any desymmetrizing reaction where two molecules of one type react to give one each of two different types:

2 A → A′ + A″

This expanded definition is not limited to redox reactions, but also includes some molecular autoionization reactions, such as the self-ionization of water.

History
The first disproportionation reaction to be studied in detail was:

2 Sn²⁺ → Sn⁴⁺ + Sn

This was examined using tartrates by Johan Gadolin in 1788. In the Swedish version of his paper he called it .

Examples
Mercury(I) chloride disproportionates upon UV irradiation:

Hg₂Cl₂ → HgCl₂ + Hg

Phosphorous acid disproportionates upon heating to give phosphoric acid and phosphine:

4 H₃PO₃ → 3 H₃PO₄ + PH₃

Desymmetrizing reactions are sometimes referred to as disproportionation, as illustrated by the thermal degradation of bicarbonate:

2 HCO₃⁻ → CO₃²⁻ + H₂CO₃

The oxidation numbers remain constant in this acid–base reaction. Another variant of disproportionation is radical disproportionation, in which two radicals form an alkene and an alkane:

2 CH₃−CH₂• → H₂C=CH₂ + H₃C−CH₃

Disproportionation of sulfur intermediates by microorganisms is widely observed in sediments:

4 S⁰ + 4 H₂O → 3 H₂S + SO₄²⁻ + 2 H⁺
3 S⁰ + 2 FeOOH → SO₄²⁻ + 2 FeS + 2 H⁺
4 SO₃²⁻ + 2 H⁺ → H₂S + 3 SO₄²⁻

Chlorine gas reacts with dilute sodium hydroxide to form sodium chloride, sodium chlorate and water. The ionic equation for this reaction is as follows:

3 Cl₂ + 6 OH⁻ → 5 Cl⁻ + ClO₃⁻ + 3 H₂O

The chlori
https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%93Lov%C3%A1sz%20lattice%20basis%20reduction%20algorithm
The Lenstra–Lenstra–Lovász (LLL) lattice basis reduction algorithm is a polynomial-time lattice reduction algorithm invented by Arjen Lenstra, Hendrik Lenstra and László Lovász in 1982. Given a basis $\{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_d\}$ with $n$-dimensional integer coordinates, for a lattice $L$ (a discrete subgroup of $\mathbb{R}^n$) with $d \leq n$, the LLL algorithm calculates an LLL-reduced (short, nearly orthogonal) lattice basis in time $O(d^5 n \log^3 B)$, where $B$ is the largest length of the $\mathbf{b}_i$ under the Euclidean norm, that is, $B = \max\left(\|\mathbf{b}_1\|_2, \ldots, \|\mathbf{b}_d\|_2\right)$. The original applications were to give polynomial-time algorithms for factorizing polynomials with rational coefficients, for finding simultaneous rational approximations to real numbers, and for solving the integer linear programming problem in fixed dimensions.

LLL reduction
The precise definition of LLL-reduced is as follows: given a basis $\{\mathbf{b}_1, \ldots, \mathbf{b}_n\}$, define its Gram–Schmidt orthogonal basis $\{\mathbf{b}_1^*, \ldots, \mathbf{b}_n^*\}$ and the Gram–Schmidt coefficients

$\mu_{k,j} = \frac{\langle \mathbf{b}_k, \mathbf{b}_j^* \rangle}{\langle \mathbf{b}_j^*, \mathbf{b}_j^* \rangle}$ for any $1 \leq j < k \leq n$.

Then the basis is LLL-reduced if there exists a parameter $\delta$ in $(0.25, 1]$ such that the following holds:
(size-reduced) For $1 \leq j < k \leq n$: $|\mu_{k,j}| \leq 0.5$. By definition, this property guarantees the length reduction of the ordered basis.
(Lovász condition) For $k = 2, 3, \ldots, n$: $\delta \|\mathbf{b}_{k-1}^*\|^2 \leq \|\mathbf{b}_k^*\|^2 + \mu_{k,k-1}^2 \|\mathbf{b}_{k-1}^*\|^2$.

Here, estimating the value of the parameter $\delta$, we can conclude how well the basis is reduced: greater values of $\delta$ lead to stronger reductions of the basis. Initially, A. Lenstra, H. Lenstra and L. Lovász demonstrated the LLL-reduction algorithm for $\delta = \frac{3}{4}$. Note that although LLL-reduction is well-defined for $\delta = 1$, the polynomial-time complexity is guaranteed only for $\delta$ in $(0.25, 1)$. The LLL algorithm computes LLL-reduced bases. There is no known efficient algorithm to compute a basis in which the basis vectors are as short as possible for lattices of dimensions greater than 4. However, an LLL-reduced basis is nearly as short as possible, in the sense that there are absolute bounds $c_i > 1$ such that the first basis vector is no more than $c_1$ times as long as a shortest vector in the lattice, the second basis vector is likewise within $c_2$ of the second successive minimum, and so o
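The following compact Python sketch (an illustration, not the authors' reference implementation) implements the algorithm with exact rational arithmetic; it recomputes the Gram–Schmidt data after every basis change, which is inefficient but keeps the code short:

from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    # Returns the orthogonal vectors b* and the coefficients mu[k][j].
    n = len(basis)
    bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = list(basis[i])
        for j in range(i):
            mu[i][j] = dot(basis[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [vi - mu[i][j] * wj for vi, wj in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

def lll(basis, delta=Fraction(3, 4)):
    basis = [[Fraction(x) for x in b] for b in basis]
    n, k = len(basis), 1
    bstar, mu = gram_schmidt(basis)
    while k < n:
        for j in range(k - 1, -1, -1):        # size reduction: |mu[k][j]| <= 1/2
            q = round(mu[k][j])
            if q:
                basis[k] = [a - q * b for a, b in zip(basis[k], basis[j])]
                bstar, mu = gram_schmidt(basis)
        # Lovász condition, rearranged as ||b*_k||^2 >= (delta - mu^2) ||b*_{k-1}||^2
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            basis[k - 1], basis[k] = basis[k], basis[k - 1]
            bstar, mu = gram_schmidt(basis)
            k = max(k - 1, 1)
    return [[int(x) for x in b] for b in basis]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
# [[0, 1, 0], [1, 0, 1], [-1, 0, 2]] (a standard worked example of LLL reduction)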
https://en.wikipedia.org/wiki/Cross-entropy
In information theory, the cross-entropy between two probability distributions $p$ and $q$ over the same underlying set of events measures the average number of bits needed to identify an event drawn from the set if a coding scheme used for the set is optimized for an estimated probability distribution $q$, rather than the true distribution $p$.

Definition
The cross-entropy of the distribution $q$ relative to a distribution $p$ over a given set is defined as follows:

$H(p, q) = -\operatorname{E}_p[\log q],$

where $\operatorname{E}_p[\cdot]$ is the expected value operator with respect to the distribution $p$. The definition may be formulated using the Kullback–Leibler divergence $D_{\mathrm{KL}}(p \parallel q)$, the divergence of $p$ from $q$ (also known as the relative entropy of $p$ with respect to $q$):

$H(p, q) = H(p) + D_{\mathrm{KL}}(p \parallel q),$

where $H(p)$ is the entropy of $p$. For discrete probability distributions $p$ and $q$ with the same support $\mathcal{X}$, this means

$H(p, q) = -\sum_{x \in \mathcal{X}} p(x) \log q(x).$

The situation for continuous distributions is analogous. We have to assume that $p$ and $q$ are absolutely continuous with respect to some reference measure $r$ (usually $r$ is a Lebesgue measure on a Borel σ-algebra). Let $P$ and $Q$ be probability density functions of $p$ and $q$ with respect to $r$. Then

$H(p, q) = -\int_{\mathcal{X}} P(x) \log Q(x) \, \mathrm{d}r(x),$

and therefore $H(p, q) = H(p) + D_{\mathrm{KL}}(p \parallel q)$ holds in the continuous case as well. NB: the notation $H(p, q)$ is also used for a different concept, the joint entropy of $p$ and $q$.

Motivation
In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value $x_i$ out of a set of possibilities $\{x_1, \ldots, x_n\}$ can be seen as representing an implicit probability distribution $q(x_i) = \left(\frac{1}{2}\right)^{\ell_i}$ over $\{x_1, \ldots, x_n\}$, where $\ell_i$ is the length of the code for $x_i$ in bits. Therefore, cross-entropy can be interpreted as the expected message-length per datum when a wrong distribution $q$ is assumed while the data actually follows a distribution $p$. That is why the expectation is taken over the true probability distribution $p$ and not $q$. Indeed the expected message-length under the true distribution $p$ is

$\operatorname{E}_p[\ell] = -\operatorname{E}_p[\log_2 q(x)] = -\sum_{x_i} p(x_i) \log_2 q(x_i) = H(p, q).$

Estimation
There are many situations where cross-entropy needs to be measured but the distribution of $p$ is unknown. An example is language modeling, where a model is created based o
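For the discrete formula, a short sketch (the distributions p and q are illustrative examples, not from the article):

import math

def cross_entropy(p, q):
    # H(p, q) = -sum_x p(x) * log2 q(x), in bits; assumes a shared support
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]   # true distribution
q = [0.25, 0.25, 0.5]   # assumed (coding) distribution

h_p = cross_entropy(p, p)    # H(p, p) reduces to the entropy H(p)
h_pq = cross_entropy(p, q)
print(f"H(p)       = {h_p:.4f} bits")         # 1.5000
print(f"H(p, q)    = {h_pq:.4f} bits")        # 1.7500
print(f"D_KL(p||q) = {h_pq - h_p:.4f} bits")  # 0.2500, since H(p,q) = H(p) + D_KL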
https://en.wikipedia.org/wiki/Tomahawk%20%28geometry%29
The tomahawk is a tool in geometry for angle trisection, the problem of splitting an angle into three equal parts. The boundaries of its shape include a semicircle and two line segments, arranged in a way that resembles a tomahawk, a Native American axe. The same tool has also been called the shoemaker's knife, but that name is more commonly used in geometry to refer to a different shape, the arbelos (a curvilinear triangle bounded by three mutually tangent semicircles). Description The basic shape of a tomahawk consists of a semicircle (the "blade" of the tomahawk), with a line segment the length of the radius extending along the same line as the diameter of the semicircle (the tip of which is the "spike" of the tomahawk), and with another line segment of arbitrary length (the "handle" of the tomahawk) perpendicular to the diameter. In order to make it into a physical tool, its handle and spike may be thickened, as long as the line segment along the handle continues to be part of the boundary of the shape. Unlike a related trisection using a carpenter's square, the other side of the thickened handle does not need to be made parallel to this line segment. In some sources a full circle rather than a semicircle is used, or the tomahawk is also thickened along the diameter of its semicircle, but these modifications make no difference to the action of the tomahawk as a trisector. Trisection To use the tomahawk to trisect an angle, it is placed with its handle line touching the apex of the angle, with the blade inside the angle, tangent to one of the two rays forming the angle, and with the spike touching the other ray of the angle. One of the two trisecting lines then lies on the handle segment, and the other passes through the center point of the semicircle. If the angle to be trisected is too sharp relative to the length of the tomahawk's handle, it may not be possible to fit the tomahawk into the angle in this way, but this difficulty may be worked around by re
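The congruence argument behind the trisection can be written out briefly (a sketch with our own labels, not taken from the article): let V be the apex of the angle on the handle line, P the spike tip on one ray, T the point of tangency of the blade on the other ray, R the point where spike, diameter and handle meet, and O the center of the semicircle, so that |PR| = |RO| = |OT| = r.

```latex
\begin{align*}
&\triangle VRP \cong \triangle VRO
 &&\text{(SAS: $VR$ common, $|PR|=|RO|=r$, $\angle VRP=\angle VRO=90^\circ$)}\\
&\triangle VRO \cong \triangle VTO
 &&\text{(RHS: $VO$ common, $|RO|=|OT|=r$, $\angle VRO=\angle VTO=90^\circ$)}\\
&\Rightarrow\ \angle PVR=\angle RVO=\angle OVT=\tfrac{1}{3}\,\angle PVT
\end{align*}
```

The right angle at R comes from the handle being perpendicular to the diameter, and the right angle at T from a radius meeting a tangent line.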
https://en.wikipedia.org/wiki/1%2C1%2C1%2C2-Tetrafluoroethane
1,1,1,2-Tetrafluoroethane (also known as norflurane (INN), R-134a, Klea®134a, Freon 134a, Forane 134a, Genetron 134a, Green Gas, Florasol 134a, Suva 134a, or HFC-134a) is a hydrofluorocarbon (HFC) and haloalkane refrigerant with thermodynamic properties similar to R-12 (dichlorodifluoromethane) but with insignificant ozone depletion potential and a lower 100-year global warming potential (1,430, compared to R-12's GWP of 10,900). It has the formula CF3CH2F and a boiling point of −26.3 °C (−15.34 °F) at atmospheric pressure. R-134a cylinders are colored light blue. A phaseout and transition to HFO-1234yf and other refrigerants, with GWPs similar to CO2, began in 2012 within the automotive market. Uses 1,1,1,2-Tetrafluoroethane is a non-flammable gas used primarily as a "high-temperature" refrigerant for domestic refrigeration and automobile air conditioners. These devices began using 1,1,1,2-tetrafluoroethane in the early 1990s as a replacement for the more environmentally harmful R-12. Retrofit kits are available to convert units that were originally R-12-equipped. Other common uses include plastic foam blowing, as a cleaning solvent, a propellant for the delivery of pharmaceuticals (e.g. bronchodilators), wine cork removers, gas dusters ("canned air"), and in air driers for removing the moisture from compressed air. 1,1,1,2-Tetrafluoroethane has also been used to cool computers in some overclocking attempts. It is the refrigerant used in plumbing pipe freeze kits. It is also commonly used as a propellant for airsoft airguns. The gas is often mixed with a silicone-based lubricant. Aspirational and niche applications 1,1,1,2-Tetrafluoroethane is also being considered as an organic solvent, both in liquid and supercritical fluid form. It is used in the resistive plate chamber particle detectors in the Large Hadron Collider. It is also used for other types of particle detectors, e.g. some cryogenic particle detectors. It can be used as an alternative to sulfur hexaf
https://en.wikipedia.org/wiki/Recursive%20filter
In signal processing, a recursive filter is a type of filter which reuses one or more of its outputs as an input. This feedback typically results in an unending impulse response (commonly referred to as infinite impulse response (IIR)), characterised by either exponentially growing, decaying, or sinusoidal signal output components. However, a recursive filter does not always have an infinite impulse response. Some implementations of the moving average filter are recursive filters, but with a finite impulse response. Non-recursive filter example: y[n] = 0.5x[n − 1] + 0.5x[n]. Recursive filter example: y[n] = 0.5y[n − 1] + 0.5x[n]. Examples of recursive filters Kalman filter Signal processing
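A few lines of Python contrast the two difference equations above (our own sketch; feeding each filter a unit impulse shows why one response is finite and the other is not):

```python
def non_recursive(x):
    # y[n] = 0.5*x[n-1] + 0.5*x[n]  -- two-point moving average, FIR
    y, prev_x = [], 0.0
    for xn in x:
        y.append(0.5 * prev_x + 0.5 * xn)
        prev_x = xn
    return y

def recursive(x):
    # y[n] = 0.5*y[n-1] + 0.5*x[n]  -- feeds its own output back, IIR
    y, prev_y = [], 0.0
    for xn in x:
        prev_y = 0.5 * prev_y + 0.5 * xn
        y.append(prev_y)
    return y

impulse = [1.0] + [0.0] * 7
print(non_recursive(impulse))  # dies after two samples: finite impulse response
print(recursive(impulse))      # decays as 0.5**n but never reaches zero
```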
https://en.wikipedia.org/wiki/Atom%20economy
Atom economy (atom efficiency/percentage) is the conversion efficiency of a chemical process in terms of all atoms involved and the desired products produced. The simplest definition was introduced by Barry Trost in 1991 and is equal to the ratio between the mass of desired product to the total mass of products, expressed as a percentage. The concept of atom economy (AE) and the idea of making it a primary criterion for improvement in chemistry, is a part of the green chemistry movement that was championed by Paul Anastas from the early 1990s. Atom economy is an important concept of green chemistry philosophy, and one of the most widely used metrics for measuring the "greenness" of a process or synthesis. Good atom economy means most of the atoms of the reactants are incorporated in the desired products and only small amounts of unwanted byproducts are formed, reducing the economic and environmental impact of waste disposal. Atom economy can be written as: atom economy = (molecular weight of desired product / total molecular weight of all products) × 100%. For example, if we consider the reaction A + B → C, where C is the desired product, then atom economy = [molecular weight of C / (molecular weight of A + molecular weight of B)] × 100%. Optimal atom economy is 100%. Atom economy is a different concern than chemical yield, because a high-yielding process can still result in substantial byproducts. Examples include the Cannizzaro reaction, in which approximately 50% of the reactant aldehyde becomes the other oxidation state of the target; the Wittig and Suzuki reactions which use high-mass reagents that ultimately become waste; and the Gabriel synthesis, which produces a stoichiometric quantity of phthalic acid salts. If the desired product has an enantiomer the reaction needs to be sufficiently stereoselective even when atom economy is 100%. A Diels-Alder reaction is an example of a potentially very atom efficient reaction that also can be chemo-, regio-, diastereo- and enantioselective. Catalytic hydrogenation comes the closest to being an ideal reaction that is extensively practiced both industrially and academically. Atom economy can also be adjusted if a
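As a worked example (ours, not from the text): in the dehydration of ethanol to ethylene, C2H5OH → C2H4 + H2O, the desired product accounts for only part of the mass of the products:

```latex
\text{atom economy}
 = \frac{M(\mathrm{C_2H_4})}{M(\mathrm{C_2H_4}) + M(\mathrm{H_2O})} \times 100\%
 = \frac{28}{28 + 18} \times 100\% \approx 61\%
```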
https://en.wikipedia.org/wiki/Cascade%20Communications
Cascade Communications Corporation was a manufacturer of communications equipment based in Westford, Massachusetts. History Cascade was founded by Gururaj Deshpande in 1990, and was led by CEO Dan Smith, VP of Sales Mike Champa and CFO Paul Blondin. Cascade made compact Frame Relay and Asynchronous Transfer Mode (ATM) communication switches that were sold to telecommunication service providers worldwide. Frame Relay service was the primary data service used by companies in the mid-1990s to create secure internal communication networks between separate sites, and Cascade's equipment carried an estimated 70% of the world's Internet traffic during this time. Their most important direct competitor was StrataCom, which was acquired by Cisco Systems in 1996 for US$4 billion. In 1997, Ascend Communications acquired Cascade Communications for US$3.7 billion to move into the ATM and Frame Relay markets. Ascend was later acquired by Lucent Technologies in 1999 in one of the largest mergers in communications equipment history (US$24 billion). The Cascade portion of Ascend's business was more interesting to Lucent than the modem termination business that comprised the rest of Ascend. Legacy Both Desh Deshpande and CEO Dan Smith profited handsomely from the acquisition, as did hundreds of Cascade employees. In addition, Cascade was notable for invigorating the telecommunications startup culture in Massachusetts in the mid-1990s. Cascade alumni were founders or key contributors to many other financially successful Boston area telecom companies in the late 1990s. Deshpande and Smith went on to found Sycamore Networks and VP of Sales Mike Champa went on to found Omnia Communications and later was the CEO of Winphoria Networks.
https://en.wikipedia.org/wiki/Second%20moment%20of%20area
The second moment of area, or second area moment, or quadratic moment of area and also known as the area moment of inertia, is a geometrical property of an area which reflects how its points are distributed with regard to an arbitrary axis. The second moment of area is typically denoted with either an I (for an axis that lies in the plane of the area) or with a J (for an axis perpendicular to the plane). In both cases, it is calculated with a multiple integral over the object in question. Its dimension is L (length) to the fourth power. Its unit of dimension, when working with the International System of Units, is meters to the fourth power, m⁴, or inches to the fourth power, in⁴, when working in the Imperial System of Units or the US customary system. In structural engineering, the second moment of area of a beam is an important property used in the calculation of the beam's deflection and the calculation of stress caused by a moment applied to the beam. In order to maximize the second moment of area, a large fraction of the cross-sectional area of an I-beam is located at the maximum possible distance from the centroid of the I-beam's cross-section. The planar second moment of area provides insight into a beam's resistance to bending due to an applied moment, force, or distributed load perpendicular to its neutral axis, as a function of its shape. The polar second moment of area provides insight into a beam's resistance to torsional deflection, due to an applied moment parallel to its cross-section, as a function of its shape. Different disciplines use the term moment of inertia (MOI) to refer to different moments. It may refer to either of the planar second moments of area (often I_x or I_y, with respect to some reference plane), or the polar second moment of area (J_z, where r is the distance to some reference axis). In each case the integral is over all the infinitesimal elements of area, dA, in some two-dimensional cross-section. In physics, moment of inertia is strict
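In LaTeX, the planar and polar definitions referred to above, together with the standard rectangle example (the worked example is ours):

```latex
I_x = \iint_A y^2 \,\mathrm{d}A, \qquad
I_y = \iint_A x^2 \,\mathrm{d}A, \qquad
J_z = \iint_A r^2 \,\mathrm{d}A = I_x + I_y
% e.g. a b-by-h rectangle about the horizontal axis through its centroid:
% I_x = \int_{-h/2}^{h/2} y^2 \, b \,\mathrm{d}y = b h^3 / 12
```

Placing area far from the axis enters as y squared, which is why the I-beam shape described above is so effective.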
https://en.wikipedia.org/wiki/Alpha%20%28finance%29
Alpha is a measure of the active return on an investment, the performance of that investment compared with a suitable market index. An alpha of 1% means the investment's return on investment over a selected period of time was 1% better than the market during that same period; a negative alpha means the investment underperformed the market. Alpha, along with beta, is one of two key coefficients in the capital asset pricing model used in modern portfolio theory and is closely related to other important quantities such as standard deviation, R-squared and the Sharpe ratio. In modern financial markets, where index funds are widely available for purchase, alpha is commonly used to judge the performance of mutual funds and similar investments. As these funds include various fees normally expressed in percent terms, the fund has to maintain an alpha greater than its fees in order to provide positive gains compared with an index fund. Historically, the vast majority of traditional funds have had negative alphas, which has led to a flight of capital to index funds and non-traditional hedge funds. It is also possible to analyze a portfolio of investments and calculate a theoretical performance, most commonly using the capital asset pricing model (CAPM). Returns on that portfolio can be compared with the theoretical returns, in which case the measure is known as Jensen's alpha. This is useful for non-traditional or highly focused funds, where a single stock index might not be representative of the investment's holdings. Definition in capital asset pricing model The alpha coefficient (α_i) is a parameter in the single-index model (SIM). It is the intercept of the security characteristic line (SCL), that is, the coefficient of the constant in a market model regression: R_{i,t} − R_f = α_i + β_i(R_{M,t} − R_f) + ε_{i,t}, where the following inputs are: R_i: the realized return (on the portfolio), R_M: the market return, R_f: the risk-free rate of return, and β_i: the beta of the portfolio. It can be shown that in an efficient market, th
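A worked numerical illustration of Jensen's alpha under the CAPM (all figures invented): with a realized portfolio return of 15%, a risk-free rate of 5%, a market return of 12%, and a beta of 1.2,

```latex
\alpha_i = R_i - \bigl[R_f + \beta_i\,(R_M - R_f)\bigr]
         = 15\% - \bigl[5\% + 1.2\,(12\% - 5\%)\bigr]
         = 15\% - 13.4\% = 1.6\%
```

so the portfolio beat its risk-adjusted benchmark by 1.6 percentage points.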
https://en.wikipedia.org/wiki/Legendre%20sieve
In mathematics, the Legendre sieve, named after Adrien-Marie Legendre, is the simplest method in modern sieve theory. It applies the concept of the Sieve of Eratosthenes to find upper or lower bounds on the number of primes within a given set of integers. Because it is a simple extension of Eratosthenes' idea, it is sometimes called the Legendre–Eratosthenes sieve. Legendre's identity The central idea of the method is expressed by the following identity, sometimes called the Legendre identity: S(A, P) = Σ_{d | P} μ(d) |A_d|, where A is a set of integers, P is a product of distinct primes, μ is the Möbius function, and A_d is the set of integers in A divisible by d, and S(A, P) is defined to be: S(A, P) = |{n ∈ A : gcd(n, P) = 1}|, i.e. S(A, P) is the count of numbers in A with no factors common with P. Note that in the most typical case, A is all integers less than or equal to some real number X, P is the product of all primes less than or equal to some integer z < X, and then the Legendre identity becomes: S(A, P) = Σ_{d | P} μ(d) ⌊X/d⌋ = ⌊X⌋ − Σ_{p ≤ z} ⌊X/p⌋ + Σ_{p < q ≤ z} ⌊X/(pq)⌋ − ⋯ (where ⌊X⌋ denotes the floor function). In this example the fact that the Legendre identity is derived from the Sieve of Eratosthenes is clear: the first term is the number of integers below X, the second term removes the multiples of all primes, the third term adds back the multiples of two primes (which were miscounted by being "crossed out twice") but also adds back the multiples of three primes once too many, and so on until all 2^{π(z)} (where π(z) denotes the number of primes below z) combinations of primes have been covered. Once S(A, P) has been calculated for this special case, it can be used to bound π(X) using the expression π(X) − π(z) + 1 ≤ S(A, P), which follows immediately from the definition of S(A, P). Limitations The Legendre sieve has a problem with fractional parts of terms accumulating into a large error, which means the sieve only gives very weak bounds in most cases. For this reason it is almost never used in practice, having been superseded by other techniques such as the Brun sieve and Selberg sieve. However, since these more powerful siev
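A concrete check of the identity (our own example): take X = 30 and z = 5, so P = 2 · 3 · 5 = 30. Then

```latex
S(A,P) = 30
 - \Bigl(\bigl\lfloor\tfrac{30}{2}\bigr\rfloor + \bigl\lfloor\tfrac{30}{3}\bigr\rfloor + \bigl\lfloor\tfrac{30}{5}\bigr\rfloor\Bigr)
 + \Bigl(\bigl\lfloor\tfrac{30}{6}\bigr\rfloor + \bigl\lfloor\tfrac{30}{10}\bigr\rfloor + \bigl\lfloor\tfrac{30}{15}\bigr\rfloor\Bigr)
 - \bigl\lfloor\tfrac{30}{30}\bigr\rfloor
 = 30 - 31 + 10 - 1 = 8
```

which matches a direct count of the integers up to 30 coprime to 30 (1, 7, 11, 13, 17, 19, 23, 29) and equals π(30) − π(5) + 1 = 10 − 3 + 1.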
https://en.wikipedia.org/wiki/BUNCH
The BUNCH was the nickname for the group of mainframe computer competitors of IBM in the 1970s. The name is derived from the names of the five companies: Burroughs, UNIVAC, NCR, Control Data Corporation (CDC), and Honeywell. These companies were grouped together because the market share of IBM was much higher than that of all of its competitors put together. During the 1960s, IBM and these five computer manufacturers, along with RCA and General Electric, had been known as "IBM and the Seven Dwarfs". The description of IBM's competitors changed after GE's 1970 sale of its computer business to Honeywell and RCA's 1971 sale of its computer business to Sperry (who owned UNIVAC), leaving only five "dwarfs". The companies' initials thus lent themselves to a new acronym, BUNCH. International Data Corporation estimated in 1984 that the BUNCH would receive less than $2 billion of an estimated $11.4 billion in mainframe computer sales that year, with IBM receiving most of the remainder. IBM so dominated the mainframe market that observers expected the BUNCH to merge or exit the industry. The BUNCH followed IBM into the microcomputer market with their own PC compatibles, but unlike IBM did not quickly adjust to retail sales of smaller computers. Digital Equipment Corporation (DEC), at one point the second largest in the industry, was joined to the BUNCH as "DeBUNCH". Fate of BUNCH Burroughs & UNIVAC In September 1986, after Burroughs purchased Sperry (the parent company of UNIVAC), the name of the company was changed to Unisys. NCR In 1982, NCR became involved in open systems architecture, starting with the UNIX-powered TOWER 16/32, and placed more emphasis on computers smaller than mainframes. NCR was acquired by AT&T Corporation in 1991. A restructuring of AT&T in 1996 led to its re-establishment on 1 January 1997 as a separate company. In 1998, NCR sold its computer hardware manufacturing assets to Solectron and ceased to produce general-purpose computer systems. Control Data Corpo
https://en.wikipedia.org/wiki/Puzznic
Puzznic is a tile-matching video game developed and released by Taito for arcades in 1989. It was ported to the Nintendo Entertainment System, Game Boy, PC Engine, X68000, Amiga, Atari ST, Amstrad CPC, Commodore 64, MS-DOS, and ZX Spectrum between 1990 and 1991. Home computer ports were handled by Ocean Software; the 2003 PlayStation port was handled by Altron. The arcade and FM Towns versions had adult content, showing a naked woman at the end of the level; this was removed in the international arcade release (but not the US one) and other home ports. A completed Apple IIGS version was cancelled after Taito America shut down. Puzznic bears strong graphical and some gameplay similarities to Taito's own Flipull/Plotting. Reception In Japan, Game Machine listed Puzznic in their December 1, 1989 issue as being the fourth-most-successful table arcade unit of the month. The game was ranked the 34th best game of all time by Amiga Power. Legacy Many clones share the same basic gameplay of Puzznic but have added extra features over the years: Puzztrix on the web and on PC, Addled and Germinal on the iPhone, Puzzled on mobile phones. A clone for the PC, Brix, was released by Epic MegaGames in 1992. For Android devices, a clone called PuzzMagic! appeared in 2015. An iPhone clone is also available with the title Gem Panic. Blockbusterz Hard Puzzle Game for iOS appeared in 2019.
https://en.wikipedia.org/wiki/Schizocoely
Schizocoely (adjective forms: schizocoelous or schizocoelic) is a process by which some animal embryos develop. The schizocoely mechanism occurs when secondary body cavities (coeloms) are formed by splitting a solid mass of mesodermal embryonic tissue. All schizocoelomates are protostomians and they show holoblastic, spiral, determinate cleavage. Etymology The term schizocoely derives from the Ancient Greek words for 'to split' and 'cavity'. This refers to the fact that fluid-filled body cavities are formed by splitting of mesodermal cells. Taxonomic distribution Protostomes develop through schizocoely, for which reason they are also known as schizocoelomates. Schizocoelous development often occurs in protostomes, as in phyla Mollusca, Annelida, and Arthropoda. Deuterostomes usually exhibit enterocoely; however, some deuterostomes like enteropneusts can exhibit schizocoely as well. Embryonic development The term refers to the order of organization of cells in the gastrula leading to development of the coelom. In mollusks, annelids, and arthropods, the mesoderm (the middle germ layer) forms as a solid mass of migrated cells from the single layer of the gastrula. The new mesoderm then splits, creating the pocket-like cavity of the coelom. See also Deuterostome Development of the digestive system Developmental biology Embryology Embryonic development Ontogeny Protostome
https://en.wikipedia.org/wiki/Invertase
β-Fructofuranosidase is an enzyme that catalyzes the hydrolysis (breakdown) of the table sugar sucrose into fructose and glucose. Alternative names for β-fructofuranosidase include invertase, saccharase, glucosucrase, β-fructosidase, invertin, sucrase, fructosylinvertase, alkaline invertase, and acid invertase. The resulting mixture of fructose and glucose is called inverted sugar syrup. Related to invertases are sucrases. Invertases and sucrases hydrolyze sucrose to give the same mixture of glucose and fructose. Invertase is a glycoprotein that hydrolyses (cleaves) the non-reducing terminal β-fructofuranoside residues. Invertases cleave the O-C(fructose) bond, whereas the sucrases cleave the O-C(glucose) bond. Invertase cleaves the α-1,2-glycosidic bond of sucrose. For industrial use, invertase is usually derived from yeast. It is also synthesized by bees, which use it to make honey from nectar. Its optimal temperature, at which the rate of reaction is greatest, is 60 °C, and its optimum pH is 4.5. Typically, sugar is inverted with sulfuric acid. Invertase is produced by various organisms like yeast, fungi, bacteria, higher plants, and animals. For example: Saccharomyces cerevisiae, Saccharomyces carlsbergensis, S. pombe, Aspergillus spp., Penicillium chrysogenum, Azotobacter spp., Lactobacillus spp., Pseudomonas spp., etc. Applications and examples Invertase is used to produce inverted sugar syrup. Invertase is expensive, so it may be preferable to make fructose from glucose using glucose isomerase, instead. Chocolate-covered candies, other cordials, and fondant candies include invertase, which liquefies the sugar. Inhibition Urea acts as a pure non-competitive inhibitor of invertase, presumably by breaking the intramolecular hydrogen bonds contributing to the tertiary structure of the enzyme. Structure and function Reaction pathway Invertase works to catalyze the cleavage of sucrose into its two monosaccharides, g
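The overall catalyzed reaction, written out with its standard stoichiometry (the equation itself is not shown in the excerpt):

```latex
\underbrace{\mathrm{C_{12}H_{22}O_{11}}}_{\text{sucrose}} + \mathrm{H_2O}
\;\xrightarrow{\ \text{invertase}\ }\;
\underbrace{\mathrm{C_6H_{12}O_6}}_{\text{glucose}} +
\underbrace{\mathrm{C_6H_{12}O_6}}_{\text{fructose}}
```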
https://en.wikipedia.org/wiki/Passive%20optical%20network
A passive optical network (PON) is a fiber-optic telecommunications technology for delivering broadband network access to end-customers. Its architecture implements a point-to-multipoint topology in which a single optical fiber serves multiple endpoints by using unpowered (passive) fiber optic splitters to divide the fiber bandwidth among the endpoints. Passive optical networks are often referred to as the last mile between an Internet service provider (ISP) and its customers. Many fiber ISPs prefer this technology. Components and characteristics A passive optical network consists of an optical line terminal (OLT) at the service provider's central office (hub), passive (non-power-consuming) optical splitters, and a number of optical network units (ONUs) or optical network terminals (ONTs), which are near end users. A PON reduces the amount of fiber and central office equipment required compared with point-to-point architectures. A passive optical network is a form of fiber-optic access network. In most cases, downstream signals are broadcast to all premises sharing multiple fibers. Encryption can prevent eavesdropping. Upstream signals are combined using a multiple access protocol, usually time-division multiple access (TDMA). History Passive optical networks were first proposed by British Telecommunications in 1987. Two major standard groups, the Institute of Electrical and Electronics Engineers (IEEE) and the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T), develop standards along with a number of other industry organizations. The Society of Cable Telecommunications Engineers (SCTE) also specified radio frequency over glass for carrying signals over a passive optical network. FSAN and ITU Starting in 1995, work on fiber to the home architectures was done by the Full Service Access Network (FSAN) working group, formed by major telecommunications service providers and system vendors. The International Telecommun
https://en.wikipedia.org/wiki/Far%20pointer
In a segmented architecture computer, a far pointer is a pointer which includes a segment selector, making it possible to point to addresses outside of the default segment. Comparison and arithmetic on far pointers is problematic: there can be several different segment-offset address pairs pointing to one physical address. In 16-bit x86 For example, in an Intel 8086, as well as in later processors running 16-bit code, a far pointer has two parts: a 16-bit segment value, and a 16-bit offset value. A linear address is obtained by shifting the binary segment value four times to the left, and then adding the offset value. Hence the effective address is 20 bits (actually 21 bits, which led to the address wraparound and the Gate A20). There can be up to 4096 different segment-offset address pairs pointing to one physical address. To compare two far pointers, they must first be converted (normalized) to their 20-bit linear representation. On C compilers targeting the 8086 processor family, far pointers were declared using a non-standard "far" qualifier. For example, char far *p; defined a far pointer to a char. The difficulty of normalizing far pointers could be avoided with the non-standard "huge" qualifier. Example of far pointer: #include <stdio.h> int main() { char far *p = (char far *)0x55550005; char far *q = (char far *)0x53332225; *p = 80; (*p)++; printf("%d", *q); return 0; } The output of this program is 81, because both addresses point to the same location. Physical Address = (value of segment register) * 0x10 + (value of offset). Location pointed to by pointer 'p' is: 0x5555 * 0x10 + 0x0005 = 0x55555. Location pointed to by pointer 'q' is: 0x5333 * 0x10 + 0x2225 = 0x55555. So, p and q both point to the same location 0x55555.
https://en.wikipedia.org/wiki/Defacement%20%28flag%29
In vexillology, defacement is the addition of a symbol or charge to a flag. For example, the New Zealand flag is the British Blue Ensign defaced with a Southern Cross in the fly. In the context of vexillology, the word "deface" carries no negative connotations, in contrast to general usage. It simply indicates a differentiation of the flag from that of another owner by addition of elements. For example, many state flags are formed by defacing the national flag with a coat of arms. History Where countries pass through changes of regime with contrasting ideological orientations (monarchist/republican, fascist/democratic, communist/capitalist, secular/religious etc.) – all of which, despite their differences, claim allegiance to a common national heritage expressed in a venerated national flag – it can happen that a new regime defaces that flag with its own specific emblem while keeping the basic flag design unchanged. Such changing ideological emblems appeared over time, among others, on the flags of Italy, Hungary, Romania, Germany (West and East; see illustration), Ethiopia, and Iran. As a result, during the Hungarian Revolution of 1956 and the Romanian Revolution of 1989, insurgents tore the emblem of the regime that they opposed out of the national flag and waved the flag with which they identified. An already defaced flag can be further defaced. For example, the Australian flag is a defaced British Blue Ensign. The Australian Border Force Flag is further defaced with the words "Australian border force" in block letters. In the United States, it is against the Flag Code to deface the national flag with advertising or with any other sigil, image, or insignia. Such flags are nevertheless commercially available, depicting the seals of various branches of the U.S. military, Native American-related objects such as tomahawks or war bonnets, and the like. It is common for association football supporters travelling abroad for a match to bring a national flag deface
https://en.wikipedia.org/wiki/Embarrassingly%20parallel
In parallel computing, an embarrassingly parallel workload or problem (also called embarrassingly parallelizable, perfectly parallel, delightfully parallel or pleasingly parallel) is one where little or no effort is needed to separate the problem into a number of parallel tasks. This is often the case where there is little or no dependency or need for communication between those parallel tasks, or for results between them. Thus, these are different from distributed computing problems that need communication between tasks, especially communication of intermediate results. They are easy to perform on server farms which lack the special infrastructure used in a true supercomputer cluster. They are thus well suited to large, Internet-based volunteer computing platforms such as BOINC, and do not suffer from parallel slowdown. The opposite of embarrassingly parallel problems are inherently serial problems, which cannot be parallelized at all. A common example of an embarrassingly parallel problem is 3D video rendering handled by a graphics processing unit, where each frame (forward method) or pixel (ray tracing method) can be handled with no interdependency. Some forms of password cracking are another embarrassingly parallel task that is easily distributed on central processing units, CPU cores, or clusters. Etymology "Embarrassingly" is used here to refer to parallelization problems which are "embarrassingly easy". The term may imply embarrassment on the part of developers or compilers: "Because so many important problems remain unsolved mainly due to their intrinsic computational complexity, it would be embarrassing not to develop parallel implementations of polynomial homotopy continuation methods." The term is first found in the literature in a 1986 book on multiprocessors by MATLAB's creator Cleve Moler, who claims to have invented the term. An alternative term, pleasingly parallel, has gained some use, perhaps to avoid the negative connotations of embarrassment
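An illustrative Python sketch (the names and the toy workload are ours): because each task is independent, a plain process pool can scatter the work with no inter-task communication at all.

```python
from multiprocessing import Pool

def render_tile(tile_id):
    # stand-in for one independent unit of work (a frame, a pixel block,
    # a batch of password guesses); tasks share no state with each other
    return sum(i * i for i in range(tile_id * 10_000))

if __name__ == "__main__":
    with Pool() as pool:                              # one worker per core by default
        results = pool.map(render_tile, range(32))    # scatter, compute, gather
    print(len(results), "tiles rendered independently")
```

The entire "parallelization" is the single pool.map call, which is exactly what makes such workloads embarrassingly easy.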
https://en.wikipedia.org/wiki/Wetting
Wetting is the ability of a liquid to maintain contact with a solid surface, resulting from intermolecular interactions when the two are brought together. This happens in the presence of a gaseous phase or another liquid phase not miscible with the first one. The degree of wetting (wettability) is determined by a force balance between adhesive and cohesive forces. Wetting is important in the bonding or adherence of two materials. Wetting and the surface forces that control wetting are also responsible for other related effects, including capillary effects. There are two types of wetting: non-reactive wetting and reactive wetting. Wetting deals with three phases of matter: gas, liquid, and solid. It is now a center of attention in nanotechnology and nanoscience studies due to the advent of many nanomaterials in the past two decades (e.g. graphene, carbon nanotubes, boron nitride nanomesh). Explanation Adhesive forces between a liquid and solid cause a liquid drop to spread across the surface. Cohesive forces within the liquid cause the drop to ball up and avoid contact with the surface. The contact angle (θ) is the angle at which the liquid–vapor interface meets the solid–liquid interface. The contact angle is determined by the balance between adhesive and cohesive forces. As the tendency of a drop to spread out over a flat, solid surface increases, the contact angle decreases. Thus, the contact angle provides an inverse measure of wettability. A contact angle less than 90° (low contact angle) usually indicates that wetting of the surface is very favorable, and the fluid will spread over a large area of the surface. Contact angles greater than 90° (high contact angle) generally mean that wetting of the surface is unfavorable, so the fluid will minimize contact with the surface and form a compact liquid droplet. For water, a wettable surface may also be termed hydrophilic and a nonwettable surface hydrophobic. Superhydrophobic surfaces have c
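The force balance described above is conventionally written as Young's equation (a standard relation, though not shown in this excerpt), relating the three interfacial tensions to the contact angle:

```latex
\gamma_{SG} = \gamma_{SL} + \gamma_{LG}\cos\theta
\quad\Longleftrightarrow\quad
\cos\theta = \frac{\gamma_{SG} - \gamma_{SL}}{\gamma_{LG}}
```

so θ < 90° (favorable wetting) occurs exactly when the solid–gas tension exceeds the solid–liquid tension.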
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20moment
The nuclear magnetic moment is the magnetic moment of an atomic nucleus and arises from the spin of the protons and neutrons. It is mainly a magnetic dipole moment; the quadrupole moment does cause some small shifts in the hyperfine structure as well. All nuclei that have nonzero spin also possess a nonzero magnetic moment and vice versa, although the connection between the two quantities is not straightforward or easy to calculate. The nuclear magnetic moment varies from isotope to isotope of an element. For a nucleus of which the numbers of protons and of neutrons are both even in its ground state (i.e. lowest energy state), the nuclear spin and magnetic moment are both always zero. In cases with odd numbers of either or both protons and neutrons, the nucleus often has nonzero spin and magnetic moment. The nuclear magnetic moment is not the sum of nucleon magnetic moments; this property is attributed to the tensorial character of the nuclear force, as in the case of the simplest nucleus in which both a proton and a neutron appear, the deuterium nucleus (deuteron). Measurement methods The methods for measuring nuclear magnetic moments can be divided into two broad groups in regard to the interaction with internal or external applied fields. Generally the methods based on external fields are more accurate. Different experimental techniques are designed in order to measure nuclear magnetic moments of a specific nuclear state. For instance, the following techniques are aimed at measuring magnetic moments of an associated nuclear state in a range of life-times τ: Nuclear Magnetic Resonance (NMR) ms. Time Differential Perturbed Angular Distribution (TDPAD) s. Perturbed Angular Correlation (PAC) ns. Time Differential Recoil Into Vacuum (TDRIV) ps. Recoil Into Vacuum (RIV) ns. Transient Field (TF) ns. Techniques such as the transient field method have allowed measuring the g factor in nuclear states with life-times of a few ps or less. Shell model According to the shell mode
https://en.wikipedia.org/wiki/Entomophagy%20in%20humans
Entomophagy in humans or human entomophagy describes the consumption of insects (entomophagy) by humans in a cultural and biological context. The scientific term used in anthropology, cultural studies, biology and medicine is anthropo-entomophagy. Anthropo-entomophagy does not include the eating of arthropods other than insects such as arachnids and myriapods, which is defined as arachnophagy. Entomophagy is scientifically documented as widespread among non-human primates and common among many human communities. The eggs, larvae, pupae, and adults of certain insects have been eaten by humans from prehistoric times to the present day. Around 3,000 ethnic groups practice entomophagy. Human insect-eating (anthropo-entomophagy) is common to cultures in most parts of the world, including Central and South America, Africa, Asia, Australia, and New Zealand. Eighty percent of the world's nations eat insects of 1,000 to 2,000 species. FAO has registered some 1,900 edible insect species and estimates that there were, in 2005, some two billion insect consumers worldwide. FAO suggests eating insects as a possible solution to environmental degradation caused by livestock production. In some societies, primarily western nations, entomophagy is uncommon or taboo. Today, insect eating is uncommon in North America and Europe, but insects remain a popular food elsewhere, and some companies are trying to introduce insects as food into Western diets. Insects eaten around the world include crickets, cicadas, grasshoppers, ants, various beetle grubs (such as mealworms, the larvae of the darkling beetle), and various species of caterpillar (such as bamboo worms, mopani worms, silkworms and waxworms). History Precursors of Homo sapiens and insects consumption Evidence suggests that evolutionary precursors of Homo sapiens were entomophagous and arachnophagous. Insectivory also features to various degrees amongst extant primates, such as marmosets and tamarins, and some researchers s
https://en.wikipedia.org/wiki/Root%20hair
Root hairs, or absorbent hairs, are outgrowths of epidermal cells, specialized cells at the tip of a plant root. They are lateral extensions of a single cell and are only rarely branched. They are found in the region of maturation of the root. Root hair cells improve plant water absorption by increasing the root surface area to volume ratio, which allows the root hair cell to take in more water. The large vacuole inside root hair cells makes this intake much more efficient. Root hairs are also important for nutrient uptake as they are the main interface between plants and mycorrhizal fungi. Function The function of all root hairs is to collect water and mineral nutrients in the soil to be sent throughout the plant. In roots, most water absorption happens through the root hairs. The length of root hairs allows them to penetrate between soil particles and prevents harmful bacterial organisms from entering the plant through the xylem vessels. Increasing the surface area of these hairs makes plants more efficient in absorbing nutrients and interacting with microbes. As root hair cells do not carry out photosynthesis, they do not contain chloroplasts. Importance Root hairs form an important surface as they are needed to absorb most of the water and nutrients needed for the plant. They are also directly involved in the formation of root nodules in legume plants. The root hairs curl around the bacteria, which allows for the formation of an infection thread into the dividing cortical cells to form the nodule. Having a large surface area, the active uptake of water and minerals through root hairs is highly efficient. Root hair cells also secrete acids (e.g., malic and citric acid), which solubilize minerals by changing their oxidation state, making the ions easier to absorb. Formation Root hair cells vary between 15 and 17 micrometers in diameter, and 80 and 1,500 micrometers in length. Root hairs are found only in the zone of maturation, also called the zone of differentiation.
https://en.wikipedia.org/wiki/Mass-independent%20fractionation
Mass-independent isotope fractionation or Non-mass-dependent fractionation (NMD), refers to any chemical or physical process that acts to separate isotopes, where the amount of separation does not scale in proportion with the difference in the masses of the isotopes. Most isotopic fractionations (including typical kinetic fractionations and equilibrium fractionations) are caused by the effects of the mass of an isotope on atomic or molecular velocities, diffusivities or bond strengths. Mass-independent fractionation processes are less common, occurring mainly in photochemical and spin-forbidden reactions. Observation of mass-independently fractionated materials can therefore be used to trace these types of reactions in nature and in laboratory experiments. Mass-independent fractionation in nature The most notable examples of mass-independent fractionation in nature are found in the isotopes of oxygen and sulfur. The first example was discovered by Robert N. Clayton, Toshiko Mayeda, and Lawrence Grossman in 1973, in the oxygen isotopic composition of refractory calcium–aluminium-rich inclusions in the Allende meteorite. The inclusions, thought to be among the oldest solid materials in the Solar System, show a pattern of low 18O/16O and 17O/16O relative to samples from the Earth and Moon. Both ratios vary by the same amount in the inclusions, although the mass difference between 18O and 16O is almost twice as large as the difference between 17O and 16O. Originally this was interpreted as evidence of incomplete mixing of 16O-rich material (created and distributed by a large star in a supernova) into the Solar nebula. However, recent measurement of the oxygen-isotope composition of the Solar wind, using samples collected by the Genesis spacecraft, shows that the most 16O-rich inclusions are close to the bulk composition of the solar system. This implies that Earth, the Moon, Mars, and asteroids all formed from 18O- and 17O-enriched material. Photodissociation of carb
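The anomaly described here is conventionally quantified as follows (textbook relations, not taken from this excerpt): mass-dependent processes keep the oxygen isotope ratios on a fractionation line of slope roughly 0.52, and the deviation from that line defines the mass-independent anomaly Δ17O:

```latex
\delta^{17}\mathrm{O} \approx 0.52\,\delta^{18}\mathrm{O}
\qquad\text{(mass-dependent line)},
\qquad
\Delta^{17}\mathrm{O} \equiv \delta^{17}\mathrm{O} - 0.52\,\delta^{18}\mathrm{O}
```

Mass-independently fractionated materials, such as the Allende inclusions, plot off the slope-0.52 line and therefore have nonzero Δ17O.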
https://en.wikipedia.org/wiki/Might%20and%20Magic%20II%3A%20Gates%20to%20Another%20World
Might and Magic II: Gates to Another World (also known as Might and Magic Book Two: Gates to Another World) is a role-playing video game developed and published by New World Computing in 1988. It is the sequel to Might and Magic Book One: The Secret of the Inner Sanctum. Gameplay After the events of Might and Magic Book One: The Secret of the Inner Sanctum, the adventurers who helped Corak defeat Sheltem on VARN take the "Gates to Another World" located in VARN to the land of CRON (Central Research Observational Nacelle). The land of CRON is facing many problems brought on by the encroachment of Sheltem and the adventurers must travel through CRON, the four elemental planes and even through time to help Corak stop Sheltem from flinging CRON into its sun. While in many ways Might and Magic II is an updated version of the original, the improved graphics help greatly with navigation, and the interface added several functions that facilitated gameplay, such as a "delay" selector which allowed for faster or slower response times, and a spinning cursor when input was required, all features lacking in Might and Magic Book One. As with Might and Magic Book One, the player used up to six player-generated characters at a time, and a total of twenty-six characters could be created, who thereafter stayed at the various inns across CRON. To maintain continuity between games, it was possible to "import" the characters developed from the first game. Additionally, Might and Magic II became the first game in the series to utilize "hirelings", predefined characters which could extend the party to eight active characters. Hirelings were controlled like regular characters but required payment each day; pay increased with level. Other new features include two new character classes, an increased number of spells, the introduction of class "upgrade" quests and more than twice the number of mini-quests. Also added were "secondary skills" such as mountaineering (necessary for travelling mountai
https://en.wikipedia.org/wiki/Will%20Harvey
Will Harvey (born 1967) is an American software developer and Silicon Valley entrepreneur. He wrote Music Construction Set (1984) for the Apple II, the first commercial sheet music processor for home computers. Music Construction Set was ported to other systems by its publisher, Electronic Arts. He wrote two games for the Apple IIGS: Zany Golf (1988) and The Immortal (1990). Harvey founded two consumer virtual world Internet companies: IMVU, an instant messaging company, and There, Inc., an MMOG company. Career Education After high school, Harvey studied computer science at Stanford University, where he earned his Bachelor's, Master's, and Ph.D. During this period, he started two game development companies and published several additional software products through Electronic Arts. Early games Harvey went to the Nueva School for middle school. He attended Crystal Springs and Uplands for high school. The first game Harvey developed was an abstract shooter for the Apple II called Lancaster (1983). Harvey contacted the president of Sirius, but the game was eventually released by minor publisher Silicon Valley Systems in 1983 and was not successful. The need for music in this game led to his development of 1984's Music Construction Set, published by Electronic Arts. It was a tremendous success. Following the success of Music Construction Set, Harvey ported Atari Games's Marble Madness to the Apple II and the Commodore 64 (1986) and developed two original games, Zany Golf (1988) and The Immortal (1990). All three projects were for Electronic Arts. The Immortal and Zany Golf were written for the Apple IIGS and ported to other systems by EA. Other companies In the mid-1990s, Harvey founded Sandcastle, an Internet technology company that addressed the network latency problems underlying virtual worlds and massively multiplayer games. Sandcastle was acquired by Adobe Systems. Harvey was one of the chief technical architects at San Francisco game studio Rocket
https://en.wikipedia.org/wiki/Alternating%20sign%20matrix
In mathematics, an alternating sign matrix is a square matrix of 0s, 1s, and −1s such that the sum of each row and column is 1 and the nonzero entries in each row and column alternate in sign. These matrices generalize permutation matrices and arise naturally when using Dodgson condensation to compute a determinant. They are also closely related to the six-vertex model with domain wall boundary conditions from statistical mechanics. They were first defined by William Mills, David Robbins, and Howard Rumsey in the former context. Examples A permutation matrix is an alternating sign matrix, and an alternating sign matrix is a permutation matrix if and only if no entry equals −1. An example of an alternating sign matrix that is not a permutation matrix is the 3 × 3 matrix with rows (0, 1, 0), (1, −1, 1), (0, 1, 0). Alternating sign matrix theorem The alternating sign matrix theorem states that the number of n × n alternating sign matrices is ∏_{k=0}^{n−1} (3k+1)!/(n+k)! = 1! 4! 7! ⋯ (3n−2)! / (n! (n+1)! ⋯ (2n−1)!). The first few terms in this sequence for n = 0, 1, 2, 3, … are 1, 1, 2, 7, 42, 429, 7436, 218348, … . This theorem was first proved by Doron Zeilberger in 1992. In 1995, Greg Kuperberg gave a short proof based on the Yang–Baxter equation for the six-vertex model with domain-wall boundary conditions, that uses a determinant calculation due to Anatoli Izergin. In 2005, a third proof was given by Ilse Fischer using what is called the operator method. Razumov–Stroganov problem In 2001, A. Razumov and Y. Stroganov conjectured a connection between O(1) loop model, fully packed loop model (FPL) and ASMs. This conjecture was proved in 2010 by Cantini and Sportiello.
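A few lines of Python confirm the product formula against the sequence quoted above (our own sketch; exact integer arithmetic throughout):

```python
from math import factorial

def asm_count(n):
    # number of n-by-n alternating sign matrices: prod_{k=0}^{n-1} (3k+1)!/(n+k)!
    num = den = 1
    for k in range(n):
        num *= factorial(3 * k + 1)
        den *= factorial(n + k)
    return num // den   # the quotient is always an integer

print([asm_count(n) for n in range(1, 8)])
# [1, 2, 7, 42, 429, 7436, 218348]
```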
https://en.wikipedia.org/wiki/Go-Back-N%20ARQ
Go-Back-N ARQ is a specific instance of the automatic repeat request (ARQ) protocol, in which the sending process continues to send a number of frames specified by a window size even without receiving an acknowledgement (ACK) packet from the receiver. It is a special case of the general sliding window protocol with a transmit window size of N and a receive window size of 1. It can transmit N frames to the peer before requiring an ACK. The receiver process keeps track of the sequence number of the next frame it expects to receive. It will discard any frame that does not have the exact sequence number it expects (either a duplicate frame it already acknowledged, or an out-of-order frame it expects to receive later) and will send an ACK for the last correct in-order frame. Once the sender has sent all of the frames in its window, it will detect that all of the frames since the first lost frame are outstanding, and will go back to the sequence number of the last ACK it received from the receiver process and fill its window starting with that frame and continue the process over again. Go-Back-N ARQ is a more efficient use of a connection than Stop-and-wait ARQ, since unlike waiting for an acknowledgement for each packet, the connection is still being utilized as packets are being sent. In other words, during the time that would otherwise be spent waiting, more packets are being sent. However, this method also results in sending frames multiple times – if any frame was lost or damaged, or the ACK acknowledging them was lost or damaged, then that frame and all following frames in the send window (even if they were received without error) will be re-sent. To avoid this, Selective Repeat ARQ can be used. Pseudocode These examples assume an infinite number of sequence and request numbers. N := window size Rn := request number Sn := sequence number Sb := sequence base Sm := sequence max function receiver is Rn := 0 Do the following forever: if
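A toy, single-threaded Python simulation of the sender/receiver interaction may help; it is our own sketch (real implementations use timers and a live channel), reusing the names Sb and Rn from the pseudocode above:

```python
import random

def go_back_n(frames, N=4, loss_rate=0.2, seed=42):
    # Sb: sequence base (oldest unacknowledged frame); Rn: receiver's request number
    rng = random.Random(seed)
    Sb, delivered, transmissions = 0, [], 0
    while Sb < len(frames):
        Rn = Sb
        for seq in range(Sb, min(Sb + N, len(frames))):  # send the whole window
            transmissions += 1
            if rng.random() < loss_rate:
                continue                # this frame is lost in transit
            if seq == Rn:               # exact expected frame: accept, cumulative ACK
                delivered.append(frames[seq])
                Rn += 1
            # any other frame is out of order and silently discarded
        Sb = Rn                         # go back to the first unacknowledged frame
    return delivered, transmissions

out, sent = go_back_n(list("GOBACKN"))
print("".join(out), sent)  # all 7 frames arrive; sent > 7 counts the resends
```

The gap between sent and the number of frames is the cost of re-sending every frame after a loss, which is exactly what Selective Repeat ARQ avoids.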
https://en.wikipedia.org/wiki/Selective%20Repeat%20ARQ
Selective Repeat ARQ or Selective Reject ARQ is a specific instance of the automatic repeat request (ARQ) protocol used to manage sequence numbers and retransmissions in reliable communications. Summary Selective Repeat is part of the automatic repeat request (ARQ). With selective repeat, the sender sends a number of frames specified by a window size even without the need to wait for individual ACK from the receiver as in Go-Back-N ARQ. The receiver may selectively reject a single frame, which may be retransmitted alone; this contrasts with other forms of ARQ, which must send every frame from that point again. The receiver accepts out-of-order frames and buffers them. The sender individually retransmits frames that have timed out. Concept It may be used as a protocol for the delivery and acknowledgement of message units, or it may be used as a protocol for the delivery of subdivided message sub-units. When used as the protocol for the delivery of messages, the sending process continues to send a number of frames specified by a window size even after a frame loss. Unlike Go-Back-N ARQ, the receiving process will continue to accept and acknowledge frames sent after an initial error; this is the general case of the sliding window protocol with both transmit and receive window sizes greater than 1. The receiver process keeps track of the sequence number of the earliest frame it has not received, and sends that number with every acknowledgement (ACK) it sends. If a frame from the sender does not reach the receiver, the sender continues to send subsequent frames until it has emptied its window. The receiver continues to fill its receiving window with the subsequent frames, replying each time with an ACK containing the sequence number of the earliest missing frame. Once the sender has sent all the frames in its window, it re-sends the frame number given by the ACKs, and then continues where it left off. The size of the sending and receiving windows must be equal, and
https://en.wikipedia.org/wiki/Dodgson%20condensation
In mathematics, Dodgson condensation or method of contractants is a method of computing the determinants of square matrices. It is named for its inventor, Charles Lutwidge Dodgson (better known by his pseudonym, Lewis Carroll, the popular author), who discovered it in 1866. The method in the case of an n × n matrix is to construct an (n − 1) × (n − 1) matrix, an (n − 2) × (n − 2), and so on, finishing with a 1 × 1 matrix, which has one entry, the determinant of the original matrix. General method This algorithm can be described in the following four steps: Let A be the given n × n matrix. Arrange A so that no zeros occur in its interior. An explicit definition of interior would be all a_{i,j} with 1 < i < n and 1 < j < n. One can do this using any operation that one could normally perform without changing the value of the determinant, such as adding a multiple of one row to another. Create an (n − 1) × (n − 1) matrix B, consisting of the determinants of every 2 × 2 submatrix of A. Explicitly, we write b_{i,j} = a_{i,j} a_{i+1,j+1} − a_{i,j+1} a_{i+1,j}. Using this (n − 1) × (n − 1) matrix, perform step 2 to obtain an (n − 2) × (n − 2) matrix C. Divide each term in C by the corresponding term in the interior of A, so c_{i,j} is replaced by c_{i,j} / a_{i+1,j+1}. Let A = B, and B = C. Repeat step 3 as necessary until the 1 × 1 matrix is found; its only entry is the determinant. Examples Without zeros One wishes to find All of the interior elements are non-zero, so there is no need to re-arrange the matrix. We make a matrix of its 2 × 2 submatrices. We then find another matrix of determinants: We must then divide each element by the corresponding element of our original matrix. The interior of the original matrix is , so after dividing we get . The process must be repeated to arrive at a 1 × 1 matrix. Dividing by the interior of the 3 × 3 matrix, which is just −5, gives and −8 is indeed the determinant of the original matrix. With zeros Simply writing out the matrices: Here we run into trouble. If we continue the process, we will eventually be dividing by 0. We can perfor
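Since the worked matrices in the examples did not survive extraction, a small Python sketch (with our own example matrix) shows the condensation loop; it assumes no zero ever appears in an interior, i.e. the re-arrangement of step 1 is not implemented:

```python
def dodgson(matrix):
    # condense an integer matrix down to its determinant
    a = [row[:] for row in matrix]
    prev = None                       # the matrix from the previous round
    while len(a) > 1:
        n = len(a)
        b = [[a[i][j] * a[i+1][j+1] - a[i][j+1] * a[i+1][j]
              for j in range(n - 1)] for i in range(n - 1)]
        if prev is not None:          # divide by the interior of the round before
            b = [[b[i][j] // prev[i+1][j+1]   # this division is always exact
                  for j in range(n - 1)] for i in range(n - 1)]
        prev, a = a, b
    return a[0][0]

print(dodgson([[2, 1, 3], [1, 4, 2], [3, 1, 1]]))  # -24, the determinant
```

For the 3 × 3 example the 2 × 2 determinants give [[7, −10], [−11, 2]], the next condensation gives 7·2 − (−10)(−11) = −96, and dividing by the interior entry 4 of the original matrix yields −24.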
https://en.wikipedia.org/wiki/Mutationism
Mutationism is one of several alternatives to evolution by natural selection that have existed both before and after the publication of Charles Darwin's 1859 book On the Origin of Species. In the theory, mutation was the source of novelty, creating new forms and new species, potentially instantaneously, in sudden jumps. This was envisaged as driving evolution, which was thought to be limited by the supply of mutations. Before Darwin, biologists commonly believed in saltationism, the possibility of large evolutionary jumps, including immediate speciation. For example, in 1822 Étienne Geoffroy Saint-Hilaire argued that species could be formed by sudden transformations, or what would later be called macromutation. Darwin opposed saltation, insisting on gradualism in evolution, in line with geology's uniformitarianism. In 1864, Albert von Kölliker revived Geoffroy's theory. In 1901 the geneticist Hugo de Vries gave the name "mutation" to seemingly new forms that suddenly arose in his experiments on the evening primrose Oenothera lamarckiana. In the first decade of the 20th century, mutationism, or as de Vries named it, Mutationstheorie, became a rival to Darwinism, supported for a while by geneticists including William Bateson, Thomas Hunt Morgan, and Reginald Punnett. Understanding of mutationism is clouded by the mid-20th century portrayal of the early mutationists by supporters of the modern synthesis as opponents of Darwinian evolution and rivals of the biometrics school, who argued that selection operated on continuous variation. In this portrayal, mutationism was defeated by a synthesis of genetics and natural selection that supposedly started later, around 1918, with work by the mathematician Ronald Fisher. However, the alignment of Mendelian genetics and natural selection began as early as 1902 with a paper by Udny Yule, and built up with theoretical and experimental work in Europe and America. Despite the controversy, the early mutationists had by 1918 already accepted nat
https://en.wikipedia.org/wiki/Neuropathology
Neuropathology is the study of disease of nervous system tissue, usually in the form of either small surgical biopsies or whole-body autopsies. Neuropathologists usually work in a department of anatomic pathology, but work closely with the clinical disciplines of neurology and neurosurgery, which often depend on neuropathology for a diagnosis. Neuropathology also relates to forensic pathology because brain disease or brain injury can be related to cause of death. Neuropathology should not be confused with neuropathy, which refers to disorders of the nerves themselves (usually in the peripheral nervous system) rather than the tissues. In neuropathology, the specializations concerned with the nervous system and with its tissues come together into one field of study. Methodology The work of the neuropathologist consists largely of examining autopsy or biopsy tissue from the brain and spinal cord to aid in diagnosis of disease. Tissue from the eyes, muscles, organ surfaces, and tumors is also examined. The biopsy is usually requested after a mass is detected by radiologic imaging, the imaging in turn driven by presenting signs and symptoms of a patient. CT scans are also used to detect abnormalities in the patient. As for autopsies, the principal work of the neuropathologist is to help in the post-mortem diagnosis of various forms of dementia and other conditions that affect the central nervous system. Tissue samples are researched within the lab for diagnosis as well as forensic investigations. Biopsies can also be taken from the skin. Epidermal nerve fiber density testing (ENFD) is a more recently developed neuropathology test in which a punch skin biopsy is taken to identify small fiber neuropathies by analyzing the nerve fibers of the skin. This pathology test is becoming available in select labs as well as many universities; it replaces the traditional sural nerve biopsy test, being less invasive. It is used to identify painful small fiber neuropathies. Neuropa
https://en.wikipedia.org/wiki/Thrifty%20phenotype
Thrifty phenotype refers to the correlation between low birth weight of neonates and the increased risk of developing metabolic syndromes later in life, including type 2 diabetes and cardiovascular diseases. Although early life undernutrition is thought to be the key driving factor to the hypothesis, other environmental factors have been explored for their role in susceptibility, such as physical inactivity. Genes may also play a role in susceptibility of these diseases, as they may make individuals predisposed to factors that lead to increased disease risk. Historical overview The term thrifty phenotype was first coined by Charles Nicholas Hales and David Barker in a study published in 1992. In their study, the authors reviewed the literature up to that point and addressed five central questions regarding the role of different factors in type 2 diabetes, on which they based their hypothesis. These questions included the following: The role of beta cell deficiency in type 2 diabetes. The extent to which beta cell deficiency contributes to insulin intolerance. The role of major nutritional elements in fetal growth. The role of abnormal amino acid supply in growth-limited neonates. The role of malnutrition in irreversibly defective beta cell growth. From the review of the existing literature, they posited that poor nutritional status in fetal and early neonatal stages could hamper the development and proper functioning of the pancreatic beta cells by impacting structural features of islet anatomy, which could consequently make the individual more susceptible to the development of type 2 diabetes in later life. However, they did not exclude other causal factors such as obesity, ageing and physical inactivity as determining factors of type 2 diabetes. In a later study, Barker et al. analyzed living patient data from Hertfordshire, UK, and found that men in their sixties having low birthweight (2.95 kg or less) were 10 times more likely to develop syndrome X (type 2 diabetes,
https://en.wikipedia.org/wiki/Ballyhoo%20%28video%20game%29
Ballyhoo is an interactive fiction video game designed by Jeff O'Neill and published by Infocom in 1985. The circus-themed game was released for ten systems, including DOS, Atari ST, and Commodore 64. Ballyhoo was labeled as "Standard" difficulty. It is Infocom's nineteenth game. Plot The player's character is bedazzled by the spectacle of the circus and the mystery of the performer's life. After attending a show of Tomas Munrab's "The Travelling Circus That Time Forgot", the player loiters near the tents instead of rushing through the exit. Maybe some clowns will practice a new act, or perhaps at least one of the trapeze artists will trip... Instead, the player overhears a strange conversation. The circus' owner has hired a drunken, inept detective to find his daughter Chelsea, who has been kidnapped. Munrab is convinced that it was an outside job; surely his loyal employees would never betray him like this! As the player begins to investigate the abduction, it soon becomes clear that the circus workers don't appreciate the intrusion. Their reactions range from indifference to hostility to attempted murder. In order to unravel the mystery, the player engages in a series of actions straight out of a circus fan's dream: dressing up as a clown, walking the high wire, and taming lions. Release Ballyhoo included the following physical items in the package: An "Official Souvenir Program" from The Traveling Circus That Time Forgot describing each of the featured acts and listing common circus slang A ticket to the circus A toy balloon imprinted with the circus' name and logo (blue was the most common color, although a few orange or black ones were also shipped) A trade card for "Dr. Nostrum's Extract", a fictitious patent medicine hailed as a "wondrous curative" containing 19% alcohol Reception Compute!'s Gazette in 1986 called Ballyhoo "richly evocative, often exasperating, and very clever". The magazine approved of the splendid feelies and "surprisingly flexible"
https://en.wikipedia.org/wiki/Smithsonian%20Tropical%20Research%20Institute
The Smithsonian Tropical Research Institute (STRI) is located in Panama and is the only bureau of the Smithsonian Institution based outside of the United States. It is dedicated to understanding the past, present, and future of tropical ecosystems and their relevance to human welfare. STRI grew out of a small field station established in 1923 on Barro Colorado Island in the Panama Canal Zone to become one of the world's leading tropical research organizations. STRI's facilities provide for long-term ecological studies in the tropics and are used by some 1,200 visiting scientists from academic and research institutions around the world every year. History Smithsonian scientists first came to Panama during the construction of the Panama Canal from 1904 to 1914. The Secretary of the Smithsonian Institution, Charles Doolittle Walcott, reached an agreement with Federico Boyd to conduct a biological inventory of the new Canal Zone in 1910, and this survey was subsequently extended to include all of Panama. Thanks largely to their efforts, the governor of the Canal Zone declared Barro Colorado Island (BCI) a biological reserve in 1923, making it one of the earliest biological reserves in the Americas. During the 1920s and 1930s BCI, in Gatun Lake, became an outdoor laboratory for scientists from U.S. universities and the Smithsonian Institution. By 1940, when BCI was designated the Canal Zone Biological Area (CZBA), more than 300 scientific publications had described the biota of BCI. In the Government Reorganization Act of 1946, BCI became a bureau of the Smithsonian Institution. The Smithsonian Tropical Research Institute (STRI) was created in 1966. With the establishment of STRI, permanent staff scientists were hired and fellowship programs were initiated to support aspiring tropical biologists. The first director after the name change was Martin Humphrey Moynihan. A strong relationship with the Republic of Panama was formalized in the Panama Canal Treaties of 197
https://en.wikipedia.org/wiki/Boxing%20kangaroo
The boxing kangaroo is a national symbol of Australia, frequently seen in pop culture. The symbol is often displayed prominently by Australian spectators at sporting events, such as at cricket, tennis, basketball and football matches, and at the Commonwealth and Olympic Games. The flag is also highly associated with its namesake national rugby league team – the Kangaroos. A distinctive flag featuring the symbol has since been considered Australia's sporting flag. History The idea of a boxing kangaroo originates from the animal's defensive behaviour, in which it will use its smaller forelegs (its arms) to hold an attacker in place while using the claws on its larger hind legs to try to kick, slash or disembowel them. This stance gives the impression that the kangaroo appears to be boxing with its attacker. The image of the boxing kangaroo has been known since at least 1891, when a cartoon titled "Jack, the fighting Kangaroo with Professor Lendermann" appeared in the magazine Melbourne Punch. In the late 19th century, outback travelling shows featured kangaroos wearing boxing gloves fighting against men. Das Boxende Känguruh, an 1895 German silent film directed by Max Skladanowsky, and an English silent film, The Boxing Kangaroo, produced by Birt Acres in 1896 also both featured kangaroos boxing against men, while the American animated shorts The Boxing Kangaroo (1920), Mickey's Kangaroo (1935) and Pop 'Im Pop! (1949) helped establish the concept of a boxing kangaroo as a popular culture cliché. The 1933 film Hell Below features a boxing match between a kangaroo and Jimmy Durante. The 1978 Hollywood movie Matilda, which starred Elliott Gould and Robert Mitchum, featured a boxing kangaroo that was exploited for prize fighting. During World War II boxing kangaroos were stencilled on Australian fighter aircraft of No. 21 Squadron RAAF based in Singapore and Malaya to differentiate their aircraft from British planes. The practice soon spread to other units, as well as
https://en.wikipedia.org/wiki/Micropolis%20Corporation
Micropolis Corporation (styled as MICROPΩLIS) was a disk drive company located in Chatsworth, California and founded in 1976. Micropolis initially manufactured high capacity (for the time) hard-sectored 5.25-inch floppy drives and controllers, later manufacturing hard drives using SCSI and ESDI interfaces. History Micropolis's first advance was to take the existing 48 tpi (tracks per inch) standard created by Shugart Associates, and double both the track density and track recording density to get four times the total storage on a 5.25-inch floppy in the "MetaFloppy" series with quad density (drives 1054, 1053, and 1043) around 1980. Micropolis pioneered 100 tpi density because of the attraction of an exact 100 tracks to the inch. "Micropolis-compatible" 5.25-inch 77-track diskette drives were also manufactured by Tandon (TM100-3M and TM100-4M). Such drives were used in a number of computers, such as the Vector Graphic S-100 bus computer, the Durango F-85, and a few Commodore disk drives (8050, 8250, 8250LP and SFD-1001). Micropolis later switched to 96 tpi when Shugart went to the 96 tpi standard, based on exact doubling of the 48 tpi standard. This allowed for backwards compatibility for reading by double stepping to read 48 tpi disks. Micropolis entered the hard disk business with an 8-inch hard drive, following Seagate's lead (Seagate was the next company Alan Shugart founded after Shugart Associates was sold). They later followed with a 5.25-inch hard drive. Micropolis started to manufacture drives in Singapore in 1986. Manufacturing of 3.5-inch hard disks started in 1991. Micropolis was one of the many hard drive manufacturers in the 1980s and 1990s that went out of business, merged, or closed their hard drive divisions; as capacities and demand for products increased, profits became hard to find. While Micropolis was able to hold on longer than many of the others, it ultimately sold its hard drive business to Singapore Technology (ST) in
https://en.wikipedia.org/wiki/System%20Management%20BIOS
In computing, the System Management BIOS (SMBIOS) specification defines data structures (and access methods) that can be used to read management information produced by the BIOS of a computer. This eliminates the need for the operating system to probe hardware directly to discover what devices are present in the computer. The SMBIOS specification is produced by the Distributed Management Task Force (DMTF), a non-profit standards development organization. The DMTF estimates that two billion client and server systems implement SMBIOS. The DMTF released the version 3.6.0 of the specification on June 20, 2022. SMBIOS was originally known as Desktop Management BIOS (DMIBIOS), since it interacted with the Desktop Management Interface (DMI). History Version 1 of the Desktop Management BIOS (DMIBIOS) specification was produced by Phoenix Technologies in or before 1996. Version 2.0 of the Desktop Management BIOS specification was released on March 6, 1996 by American Megatrends (AMI), Award Software, Dell, Intel, Phoenix Technologies, and SystemSoft Corporation. It introduced 16-bit plug-and-play functions used to access the structures from Windows 95. The last version to be published directly by vendors was 2.3 on August 12, 1998. The authors were American Megatrends, Award Software, Compaq, Dell, Hewlett-Packard, Intel, International Business Machines (IBM), Phoenix Technologies, and SystemSoft Corporation. Circa 1999, the Distributed Management Task Force (DMTF) took ownership of the specification. The first version published by the DMTF was 2.3.1 on March 16, 1999. At approximately the same time Microsoft started to require that OEMs and BIOS vendors support the interface/data-set in order to have Microsoft certification. Version 3.0.0, introduced in February 2015, added a 64-bit entry point, which can coexist with the previously defined 32-bit entry point. Version 3.4.0 was released in August 2020. Version 3.5.0 was released in September 2021. Version 3.6.0
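As a minimal illustration of reading this data (not taken from the specification text above), the following Python sketch decodes the fixed 24-byte SMBIOS 3.x 64-bit entry point, assuming a Linux system that exposes the tables under /sys/firmware/dmi/tables (usually readable only by root); the field layout follows the published DMTF specification.

# Minimal sketch: read the SMBIOS 3.x (64-bit, "_SM3_") entry point that the
# Linux kernel exposes under sysfs, and decode its fixed 24-byte layout.
# Paths and field layout follow the DMTF SMBIOS 3.x specification; run as root.
import struct

ENTRY = "/sys/firmware/dmi/tables/smbios_entry_point"  # kernel-provided copy

with open(ENTRY, "rb") as f:
    raw = f.read()

if raw[:5] == b"_SM3_":
    (anchor, checksum, length, major, minor, docrev,
     revision, _reserved, table_max, table_addr) = struct.unpack_from("<5sBBBBBBBIQ", raw)
    print(f"SMBIOS {major}.{minor}.{docrev}, table <= {table_max} bytes "
          f"at physical address {table_addr:#x}")
else:
    print("32-bit (_SM_) entry point or no SMBIOS table exposed")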
https://en.wikipedia.org/wiki/Parent%E2%80%93offspring%20conflict
Parent–offspring conflict (POC) is an expression coined in 1974 by Robert Trivers. It is used to describe the evolutionary conflict arising from differences in optimal parental investment (PI) in an offspring from the standpoint of the parent and the offspring. PI is any investment by the parent in an individual offspring that decreases the parent's ability to invest in other offspring, while the selected offspring's chance of surviving increases. POC occurs in sexually reproducing species and is based on a genetic conflict: Parents are equally related to each of their offspring and are therefore expected to equalize their investment among them. Offspring are only half or less related to their siblings (and fully related to themselves), so they try to get more PI than the parents intended to provide even at their siblings' disadvantage. However, POC is limited by the close genetic relationship between parent and offspring: If an offspring obtains additional PI at the expense of its siblings, it decreases the number of its surviving siblings. Therefore, any gene in an offspring that leads to additional PI decreases (to some extent) the number of surviving copies of itself that may be located in siblings. Thus, if the costs in siblings are too high, such a gene might be selected against despite the benefit to the offspring. The problem of specifying how an individual is expected to weigh a relative against itself has been examined by W. D. Hamilton in 1964 in the context of kin selection. Hamilton's rule says that altruistic behavior will be positively selected if the benefit to the recipient multiplied by the genetic relatedness of the recipient to the performer is greater than the cost to the performer of a social act. Conversely, selfish behavior can only be favoured when Hamilton's inequality is not satisfied. This leads to the prediction that, other things being equal, POC will be stronger under half siblings (e.g., unrelated males father a female's successive
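As a numerical sketch of this logic (illustrative values, not drawn from Trivers's paper): an offspring that demands extra PI gains a benefit b to itself at a cost c to siblings related to it by r, and Hamilton's inequality predicts conflict exactly when the benefit exceeds the offspring's relatedness-devalued cost but not the parent's full cost.

# Hedged sketch of the logic in the passage: an offspring demanding extra
# parental investment gains benefit b to itself but imposes cost c on
# siblings to whom it is related by r (0.5 for full sibs, 0.25 for half sibs).
# The parent, equally related to all offspring, favors the demand only if
# b > c; the offspring favors it whenever b > r * c (Hamilton's rule applied
# to self vs. sibs). The region r*c < b <= c is the zone of conflict.

def conflict_zone(b, c, r):
    offspring_favors = b > r * c
    parent_favors = b > c
    return offspring_favors and not parent_favors

for r, label in [(0.5, "full siblings"), (0.25, "half siblings")]:
    zone = [round(b, 2) for b in [x / 10 for x in range(1, 21)]
            if conflict_zone(b, c=1.0, r=r)]
    print(f"{label}: conflict for benefit b in {zone[0]}..{zone[-1]} (cost c = 1)")

Running this shows a wider conflict zone for half siblings, matching the prediction that POC is stronger when relatedness among siblings is lower.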
https://en.wikipedia.org/wiki/Kam%C4%81l%20al-D%C4%ABn%20al-F%C4%81ris%C4%AB
Kamal al-Din Hasan ibn Ali ibn Hasan al-Farisi or Abu Hasan Muhammad ibn Hasan (1267 – 12 January 1319, long assumed to be 1320) was a Persian Muslim scientist. He made two major contributions to science, one on optics, the other on number theory. Farisi was a pupil of the astronomer and mathematician Qutb al-Din al-Shirazi, who in turn was a pupil of Nasir al-Din Tusi. According to Encyclopædia Iranica, Kamal al-Din was the most prominent Persian author on optics. Optics His work on optics was prompted by a question put to him concerning the refraction of light. Shirazi advised him to consult the Book of Optics of Ibn al-Haytham (Alhacen), and Farisi made such a deep study of this treatise that Shirazi suggested that he write what is essentially a revision of that major work, which came to be called the Tanqih. Qutb al-Din Al-Shirazi himself was writing a commentary on works of Avicenna at the time. Farisi is known for giving the first mathematically satisfactory explanation of the rainbow, and an explication of the nature of colours that reformed the theory of Ibn al-Haytham (Alhazen). Farisi also "proposed a model where the ray of light from the sun was refracted twice by a water droplet, one or more reflections occurring between the two refractions." He verified this through extensive experimentation using a transparent sphere filled with water and a camera obscura. His research in this regard was based on theoretical investigations in dioptrics conducted on the so-called Burning Sphere (al-Kura al-muhriqa) in the tradition of Ibn Sahl (d. ca. 1000) and Ibn al-Haytham (d. ca. 1041) after him. As he noted in his Kitab Tanqih al-Manazir (The Revision of the Optics), Farisi used a large clear vessel of glass in the shape of a sphere, which was filled with water, in order to have an experimental large-scale model of a rain drop. He then placed this model within a camera obscura that has a controlled aperture for the introduction of light. He projected light
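The geometry Farisi proposed can be checked with modern numerics (the method below is a present-day sketch, not his own procedure; a refractive index of n = 1.333 is assumed for water): a ray refracts into a spherical drop, reflects once internally, and refracts out, and scanning incidence angles shows the total deviation bottoming out near 138 degrees, which places the concentrated light of the primary rainbow about 42 degrees from the antisolar point.

# Sketch (with modern numerics) of the double-refraction model the passage
# attributes to Farisi: sunlight refracts into a spherical water drop,
# reflects once inside, and refracts out. Scanning the impact angle shows
# the deviation has a minimum near 138 degrees, i.e. light concentrates
# ~42 degrees from the antisolar point -- the primary rainbow.
import math

n = 1.333  # assumed refractive index of water

def deviation(i):
    r = math.asin(math.sin(i) / n)          # Snell's law at entry
    return math.pi + 2 * i - 4 * r          # two refractions + one reflection

best = min((deviation(math.radians(d / 10)), d / 10) for d in range(1, 900))
print(f"minimum deviation {math.degrees(best[0]):.1f} deg "
      f"at incidence {best[1]:.1f} deg -> rainbow at "
      f"{180 - math.degrees(best[0]):.1f} deg")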
https://en.wikipedia.org/wiki/Philip%20Morrison
Philip Morrison (November 7, 1915 – April 22, 2005) was a professor of physics at the Massachusetts Institute of Technology (MIT). He is known for his work on the Manhattan Project during World War II, and for his later work in quantum physics, nuclear physics, high energy astrophysics, and SETI. A graduate of Carnegie Tech, Morrison became interested in physics, which he studied at the University of California, Berkeley, under the supervision of J. Robert Oppenheimer. He also joined the Communist Party. During World War II he joined the Manhattan Project's Metallurgical Laboratory at the University of Chicago, where he worked with Eugene Wigner on the design of nuclear reactors. In 1944 he moved to the Manhattan Project's Los Alamos Laboratory in New Mexico, where he worked with George Kistiakowsky on the development of explosive lenses required to detonate the implosion-type nuclear weapon. Morrison transported the core of the Trinity test device to the test site in the back seat of a Dodge sedan. As leader of Project Alberta's pit crew he helped load the atomic bombs on board the aircraft that participated in the atomic bombing of Hiroshima and Nagasaki. After the war ended, he traveled to Hiroshima as part of the Manhattan Project's mission to assess the damage. After the war he became a champion of nuclear nonproliferation. He wrote for the Bulletin of the Atomic Scientists, and helped found the Federation of American Scientists and the Institute for Defense and Disarmament Studies. He was one of the few ex-communists to remain employed and academically active throughout the 1950s, but his research turned away from nuclear physics towards astrophysics. He published papers on cosmic rays, and a 1958 paper of his is considered to mark the birth of gamma ray astronomy. He was also known for writing popular science books and articles, and appearing in television programs. Early life and education Philip Morrison was born in Somerville, New Jersey, November 7, 19
https://en.wikipedia.org/wiki/SCIgen
SCIgen is a paper generator that uses context-free grammar to randomly generate nonsense in the form of computer science research papers. Its original data source was a collection of computer science papers downloaded from CiteSeer. All elements of the papers are formed, including graphs, diagrams, and citations. Created by scientists at the Massachusetts Institute of Technology, its stated aim is "to maximize amusement, rather than coherence." Originally created in 2005 to expose the lack of scrutiny of submissions to conferences, the generator subsequently became used, primarily by Chinese academics, to create large numbers of fraudulent conference submissions, leading to the retraction of 122 SCIgen generated papers and the creation of detection software to combat its use. Sample output Opening abstract of Rooter: A Methodology for the Typical Unification of Access Points and Redundancy: Prominent results In 2005, a paper generated by SCIgen, Rooter: A Methodology for the Typical Unification of Access Points and Redundancy, was accepted as a non-reviewed paper to the 2005 World Multiconference on Systemics, Cybernetics and Informatics (WMSCI) and the authors were invited to speak. The authors of SCIgen described their hoax on their website, and it soon received great publicity when picked up by Slashdot. WMSCI withdrew their invitation, but the SCIgen team went anyway, renting space in the hotel separately from the conference and delivering a series of randomly generated talks on their own "track". The organizer of these WMSCI conferences is Professor Nagib Callaos. From 2000 until 2005, the WMSCI was also sponsored by the Institute of Electrical and Electronics Engineers. The IEEE stopped granting sponsorship to Callaos from 2006 to 2008. Submitting the paper was a deliberate attempt to embarrass WMSCI, which the authors claim accepts low-quality papers and sends unsolicited requests for submissions in bulk to academics. As the SCIgen website states: Computi
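The context-free technique is simple to reproduce in miniature. The toy Python grammar below is invented for illustration (it is not SCIgen's actual grammar): it expands a start symbol by substituting random productions until only terminal words remain.

# A toy illustration (not SCIgen's actual grammar) of the context-free
# technique the article describes: expand a start symbol by repeatedly
# substituting random productions until only terminals remain.
import random

GRAMMAR = {
    "SENTENCE": [["We", "VERB", "that", "NOUN", "can", "be", "made", "ADJ"]],
    "VERB": [["argue"], ["demonstrate"], ["confirm"]],
    "NOUN": [["the", "Ethernet"], ["e-commerce"], ["DHTs"]],
    "ADJ": [["scalable"], ["stochastic"], ["knowledge-based"]],
}

def expand(symbol):
    if symbol not in GRAMMAR:              # terminal word
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [w for part in production for w in expand(part)]

print(" ".join(expand("SENTENCE")) + ".")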
https://en.wikipedia.org/wiki/Ageusia
Ageusia (from negative prefix a- and Ancient Greek γεῦσις geûsis 'taste') is the loss of taste functions of the tongue, particularly the inability to detect sweetness, sourness, bitterness, saltiness, and umami (meaning 'pleasant/savory taste'). It is sometimes confused with anosmia – a loss of the sense of smell. Because the tongue can only indicate texture and differentiate between sweet, sour, bitter, salty, and umami, most of what is perceived as the sense of taste is actually derived from smell. True ageusia is relatively rare compared to hypogeusia – a partial loss of taste – and dysgeusia – a distortion or alteration of taste. Causes The main causes of taste disorders are head trauma, infections of upper respiratory tract, exposure to toxic substances, iatrogenic causes, medicines, glossodynia ("burning mouth syndrome (BMS)") and COVID-19. Head trauma can cause lesions in regions of the central nervous system which are involved in processing taste stimuli, including thalamus, brain stem, and temporal lobes; it can also cause damage to neurological pathways involved in transmission of taste stimuli. Neurological damage Tissue damage to the nerves that support the tongue can cause ageusia, especially damage to the chorda tympani nerve and the glossopharyngeal nerve. The chorda tympani nerve passes taste for the front two-thirds of the tongue and the glossopharyngeal nerve passes taste for the back third of the tongue. The lingual nerve (which is a branch of the trigeminal V3 nerve, but carries taste sensation back to the chorda tympani nerve to the geniculate ganglion of the facial nerve) can also be damaged during otologic surgery, causing a feeling of metal taste. Problems with the endocrine system Deficiency of vitamin B3 (niacin) and zinc can cause problems with the endocrine system, which may cause taste loss or alteration. Disorders of the endocrine system, such as Cushing's syndrome, hypothyroidism and diabetes mellitus, can cause similar problems.
https://en.wikipedia.org/wiki/Visual%20Prolog
Visual Prolog, previously known as PDC Prolog and Turbo Prolog, is a strongly typed object-oriented extension of Prolog. As Turbo Prolog, it was marketed by Borland but it is now developed and marketed by the Danish firm PDC that originally created it. Visual Prolog can build Microsoft Windows GUI-applications, console applications, DLLs (dynamic link libraries), and CGI-programs. It can also link to COM components and to databases by means of ODBC. Visual Prolog contains a compiler which generates x86 and x86-64 machine code. Unlike standard Prolog, programs written in Visual Prolog are statically typed. This allows some errors to be caught at compile-time instead of run-time. History Hanoi example In the Towers of Hanoi example, the Prolog inference engine figures out how to move a stack of any number of progressively smaller disks, one at a time, from the left pole to the right pole in the described way, by means of a center as transit, so that there's never a bigger disk on top of a smaller disk. The predicate hanoi takes an integer indicating the number of disks as an initial argument. class hanoi predicates hanoi : (unsigned N). end class hanoi implement hanoi domains pole = left; center; right. clauses hanoi(N) :- move(N, left, center, right). class predicates move : (unsigned N, pole A, pole B, pole C). clauses move(0, _, _, _) :- !. move(N, A, B, C) :- move(N-1, A, C, B), stdio::writef("move a disc from % pole to the % pole\n", A, C), move(N-1, B, A, C). end implement hanoi goal console::init(), hanoi::hanoi(4). Reception Bruce F. Webster of BYTE praised Turbo Prolog in September 1986, stating that it was the first Borland product to excite him as much as Turbo Pascal did. He liked the user interface and low price, and reported that two BYU professors stated that it was superior to the Prolog they used at the university. While q
https://en.wikipedia.org/wiki/System%20equivalence
In the systems sciences system equivalence is the behavior of a parameter or component of a system in a way similar to a parameter or component of a different system. Similarity means that mathematically the parameters and components will be indistinguishable from each other. Equivalence can be very useful in understanding how complex systems work. Overview Examples of equivalent systems are first- and second-order (in the independent variable) translational, electrical, torsional, fluidic, and caloric systems. Equivalent systems can be used to change large and expensive mechanical, thermal, and fluid systems into a simple, cheaper electrical system. Then the electrical system can be analyzed to validate that the system dynamics will work as designed. This is a preliminary, inexpensive way for engineers to test that their complex system performs the way they expect. This testing is necessary when designing new complex systems that have many components. Businesses do not want to spend millions of dollars on a system that does not perform as expected. Using the equivalent system technique, engineers can verify and prove to the business that the system will work. This lowers the risk factor that the business is taking on the project. The following is a chart of equivalent variables for the different types of systems:
{| class="wikitable"
|-
! System type
! Flow variable
! Effort variable
! Compliance
! Inductance
! Resistance
|-
| Mechanical
| dx/dt
| F = force
| spring (k)
| mass (m)
| damper (c)
|-
| Electrical
| i = current
| V = voltage
| capacitance (C)
| inductance (L)
| resistance (R)
|-
| Thermal
| qh = heat flow rate
| ∆T = change in temperature
| object (C)
| inductance (L)
| conduction and convection (R)
|-
| Fluid
| qm = mass flow rate, qv = volume flow rate
| p = pressure, h = height
| tank (C)
| mass (m)
| valve or orifice (R)
|}
Flow variable: moves through the system Effort variable: puts the system into action
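As a worked example of the chart's force-voltage analogy (illustrative numbers, sketched in Python): a mass-spring-damper with parameters m, c, k maps to a series RLC loop with L = m, R = c and C = 1/k, after which the two systems share the same natural frequency and damping ratio.

# Sketch of the chart's force-voltage analogy: a mass-spring-damper
# (m, c, k) maps onto a series RLC circuit with L = m, R = c, C = 1/k.
# Both systems then share the same characteristic dynamics, checked here
# via natural frequency and damping ratio. Parameter values are illustrative.
import math

m, c, k = 2.0, 0.8, 50.0          # mass [kg], damper [N s/m], spring [N/m]
L, R, C = m, c, 1.0 / k           # equivalent inductance, resistance, capacitance

w0_mech = math.sqrt(k / m)
w0_elec = 1.0 / math.sqrt(L * C)
zeta_mech = c / (2.0 * math.sqrt(k * m))
zeta_elec = (R / 2.0) * math.sqrt(C / L)

print(f"natural frequency: {w0_mech:.4f} vs {w0_elec:.4f} rad/s")
print(f"damping ratio:     {zeta_mech:.4f} vs {zeta_elec:.4f}")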
https://en.wikipedia.org/wiki/Vaccine%20hesitancy
Vaccine hesitancy is a delay in acceptance, or refusal, of vaccines despite the availability of vaccine services and supporting evidence. The term covers refusals to vaccinate, delaying vaccines, accepting vaccines but remaining uncertain about their use, or using certain vaccines but not others. The scientific consensus that vaccines are generally safe and effective is overwhelming. Vaccine hesitancy often results in disease outbreaks and deaths from vaccine-preventable diseases. Therefore, the World Health Organization characterizes vaccine hesitancy as one of the top ten global health threats. Vaccine hesitancy is complex and context-specific, varying across time, place and vaccines. It can be influenced by factors such as lack of proper scientifically based knowledge and understanding about how vaccines are made or work, as well as psychological factors including fear of needles and distrust of public authorities, a person's lack of confidence (mistrust of the vaccine and/or healthcare provider), complacency (the person does not see a need for the vaccine or does not see the value of the vaccine), and convenience (access to vaccines). It has existed since the invention of vaccination and pre-dates the coining of the terms "vaccine" and "vaccination" by nearly eighty years. "Anti-vaccinationism" refers to total opposition to vaccination; in more recent years, anti-vaccinationists have been known as "anti-vaxxers" or "anti-vax". The specific hypotheses raised by anti-vaccination advocates have been found to change over time. Anti-vaccine activism has been increasingly connected to political and economic goals. Although myths, conspiracy theories, misinformation and disinformation spread by the anti-vaccination movement and fringe doctors lead to vaccine hesitancy and public debates around the medical, ethical, and legal issues related to vaccines, there is no serious hesitancy or debate within mainstream medical and scientific circles about the benefits of vac
https://en.wikipedia.org/wiki/Acid2
Acid2 is a webpage that tests web browsers' functionality in displaying aspects of HTML markup, CSS 2.1 styling, PNG images, and data URIs. The test page was released on 13 April 2005 by the Web Standards Project. The Acid2 test page will be displayed correctly in any application that follows the World Wide Web Consortium and Internet Engineering Task Force specifications for these technologies. These specifications are known as web standards because they describe how technologies used on the web are expected to function. Acid2 tests for rendering flaws in web browsers and other applications that render HTML. Named after the acid test for gold, it was developed in the spirit of Acid1, a relatively narrow test of compliance with the Cascading Style Sheets 1.0 (CSS1) standard. As with Acid1, an application passes the test if the way it displays the test page matches a reference image. Acid2 was designed with Microsoft Internet Explorer particularly in mind. The creators of Acid2 were dismayed that Internet Explorer did not follow web standards. It was prone to display web pages differently from other browsers, causing web developers to spend time tweaking their web pages. Acid2 challenged Microsoft to make Internet Explorer comply with web standards. On 31 October 2005, Safari 2.0.2 became the first browser to pass Acid2. Opera, Konqueror, Firefox, and others followed. With the release of Internet Explorer 8 on 19 March 2009, the latest versions of all major desktop web browsers now pass the test. Acid2 was followed by Acid3. History Acid2 was first proposed by Håkon Wium Lie, chief technical officer of Opera Software and creator of the widely used Cascading Style Sheets web standard. In a 16 March 2005 article on CNET, Lie expressed dismay that Microsoft Internet Explorer did not properly support web standards and hence was not completely interoperable with other browsers. He announced that Acid2 would be a challenge to Microsoft to design Internet Explorer 7, the
https://en.wikipedia.org/wiki/Degree%20of%20parallelism
The degree of parallelism (DOP) is a metric which indicates how many operations can be or are being simultaneously executed by a computer. It is used as an indicator of the complexity of algorithms, and is especially useful for describing the performance of parallel programs and multi-processor systems. A program running on a parallel computer may utilize different numbers of processors at different times. For each time period, the number of processors used to execute a program is defined as the degree of parallelism. The plot of the DOP as a function of time for a given program is called the parallelism profile. See also Optical Multi-Tree with Shuffle Exchange
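A sketch of how a parallelism profile can be computed from a program trace (the Python example below uses invented task intervals): count how many operations are in flight in each time slot, then report the average DOP as total work divided by elapsed time.

# Sketch: compute a parallelism profile from (start, end) intervals of the
# operations a program executed, then report the average degree of
# parallelism. The intervals below are made up for illustration.
from collections import Counter

tasks = [(0, 4), (0, 2), (1, 5), (3, 6), (5, 6)]   # (start, end) times

profile = Counter()
for start, end in tasks:
    for t in range(start, end):
        profile[t] += 1                             # DOP during [t, t+1)

span = max(end for _, end in tasks)
for t in range(span):
    print(f"t={t}: DOP={profile[t]}")
avg_dop = sum(profile.values()) / span
print(f"average DOP = {avg_dop:.2f}")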
https://en.wikipedia.org/wiki/Convergent%20series
In mathematics, a series is the sum of the terms of an infinite sequence of numbers. More precisely, an infinite sequence $(a_1, a_2, a_3, \ldots)$ defines a series $S$ that is denoted $S = a_1 + a_2 + a_3 + \cdots = \sum_{k=1}^{\infty} a_k.$ The $n$th partial sum $S_n$ is the sum of the first $n$ terms of the sequence; that is, $S_n = a_1 + a_2 + \cdots + a_n = \sum_{k=1}^{n} a_k.$ A series is convergent (or converges) if the sequence $(S_1, S_2, S_3, \ldots)$ of its partial sums tends to a limit; that means that, when adding one $a_k$ after the other in the order given by the indices, one gets partial sums that become closer and closer to a given number. More precisely, a series converges, if there exists a number $\ell$ such that for every arbitrarily small positive number $\varepsilon$, there is a (sufficiently large) integer $N$ such that for all $n \ge N$, $\left| S_n - \ell \right| < \varepsilon.$ If the series is convergent, the (necessarily unique) number $\ell$ is called the sum of the series. The same notation $\sum_{k=1}^{\infty} a_k$ is used for the series, and, if it is convergent, for its sum. This convention is similar to that which is used for addition: $a + b$ denotes the operation of adding $a$ and $b$ as well as the result of this addition, which is called the sum of $a$ and $b$. Any series that is not convergent is said to be divergent or to diverge. Examples of convergent and divergent series The reciprocals of the positive integers produce a divergent series (harmonic series): $\sum_{n=1}^{\infty} \frac{1}{n} = \frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \cdots \to \infty.$ Alternating the signs of the reciprocals of positive integers produces a convergent series (alternating harmonic series): $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = \frac{1}{1} - \frac{1}{2} + \frac{1}{3} - \cdots = \ln 2.$ The reciprocals of prime numbers produce a divergent series (so the set of primes is "large"; see divergence of the sum of the reciprocals of the primes): $\sum_{p \text{ prime}} \frac{1}{p} = \frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \cdots \to \infty.$ The reciprocals of triangular numbers produce a convergent series: $\sum_{n=1}^{\infty} \frac{2}{n(n+1)} = \frac{1}{1} + \frac{1}{3} + \frac{1}{6} + \cdots = 2.$ The reciprocals of factorials produce a convergent series (see e): $\sum_{n=0}^{\infty} \frac{1}{n!} = \frac{1}{1} + \frac{1}{1} + \frac{1}{2} + \frac{1}{6} + \cdots = e.$ The reciprocals of square numbers produce a convergent series (the Basel problem): $\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{1}{1} + \frac{1}{4} + \frac{1}{9} + \cdots = \frac{\pi^2}{6}.$ The reciprocals of powers of 2 produce a convergent series (so the set of powers of 2 is "small"): $\sum_{n=0}^{\infty} \frac{1}{2^n} = \frac{1}{1} + \frac{1}{2} + \frac{1}{4} + \cdots = 2.$ The reciprocals of powers of any $n > 1$ produce a convergent series: $\sum_{k=0}^{\infty} \frac{1}{n^k} = \frac{n}{n-1}.$ Alternating the signs of reciprocals of powers of 2 also produces a convergent series: $\sum_{n=0}^{\infty} \frac{(-1)^n}{2^n} = \frac{1}{1} - \frac{1}{2} + \frac{1}{4} - \cdots = \frac{2}{3}.$
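The definitions can be checked numerically (a Python sketch; the values of n are chosen for illustration): partial sums of the Basel series settle toward pi^2/6, while harmonic partial sums keep growing.

# Numerical illustration of the definitions above: partial sums of the
# Basel series 1/n^2 approach pi^2/6, while partial sums of the harmonic
# series 1/n keep growing without settling near any limit.
import math

def partial_sum(term, n):
    return sum(term(k) for k in range(1, n + 1))

for n in (10, 100, 10_000):
    basel = partial_sum(lambda k: 1.0 / k**2, n)
    harmonic = partial_sum(lambda k: 1.0 / k, n)
    print(f"n={n:>6}: sum 1/k^2 = {basel:.6f}  sum 1/k = {harmonic:.4f}")

print(f"limit pi^2/6 = {math.pi**2 / 6:.6f}")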
https://en.wikipedia.org/wiki/Serial%20analysis%20of%20gene%20expression
Serial Analysis of Gene Expression (SAGE) is a transcriptomic technique used by molecular biologists to produce a snapshot of the messenger RNA population in a sample of interest in the form of small tags that correspond to fragments of those transcripts. Several variants have been developed since, most notably a more robust version, LongSAGE, RL-SAGE and the most recent SuperSAGE. Many of these have improved the technique with the capture of longer tags, enabling more confident identification of a source gene. Overview Briefly, SAGE experiments proceed as follows: The mRNA of an input sample (e.g. a tumour) is isolated and a reverse transcriptase and biotinylated primers are used to synthesize cDNA from mRNA. The cDNA is bound to Streptavidin beads via interaction with the biotin attached to the primers, and is then cleaved using a restriction endonuclease called an anchoring enzyme (AE). The location of the cleavage site and thus the length of the remaining cDNA bound to the bead will vary for each individual cDNA (mRNA). The cleaved cDNA downstream from the cleavage site is then discarded, and the remaining immobile cDNA fragments upstream from cleavage sites are divided in half and exposed to one of two adaptor oligonucleotides (A or B) containing several components in the following order upstream from the attachment site: 1) Sticky ends with the AE cut site to allow for attachment to cleaved cDNA; 2) A recognition site for a restriction endonuclease known as the tagging enzyme (TE), which cuts about 15 nucleotides downstream of its recognition site (within the original cDNA/mRNA sequence); 3) A short primer sequence unique to either adaptor A or B, which will later be used for further amplification via PCR. After adaptor ligation, cDNA are cleaved using TE to remove them from the beads, leaving only a short "tag" of about 11 nucleotides of original cDNA (15 nucleotides minus the 4 corresponding to the AE recognition site). The cleaved cDNA tags are th
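The tag-extraction step can be sketched in code. The Python fragment below assumes the anchoring enzyme is NlaIII (recognition site CATG, as in the original SAGE protocol) and uses an invented cDNA sequence; it takes the ~11 nt adjacent to the 3'-most site as the transcript's tag.

# Sketch of the tag-extraction idea: find the 3'-most anchoring-enzyme site
# in a cDNA (NlaIII, recognition site CATG) and take the adjacent ~11 nt as
# the transcript's tag. The sequence below is invented for illustration.
AE_SITE = "CATG"
TAG_LEN = 11          # nucleotides of original cDNA retained next to the site

def sage_tag(cdna):
    pos = cdna.rfind(AE_SITE)           # 3'-most occurrence of the AE site
    if pos < 0:
        return None                     # no anchoring site: transcript is lost
    start = pos + len(AE_SITE)
    return cdna[start:start + TAG_LEN]

cdna = "GGTACCATGAAGTTCCATGGTTCAGACCTAAA"
print(sage_tag(cdna))   # tag immediately downstream of the last CATG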
https://en.wikipedia.org/wiki/Protein%20toxicity
Protein toxicity is the effect of the buildup of protein metabolic waste compounds, like urea, uric acid, ammonia, and creatinine. Protein toxicity has many causes, including urea cycle disorders, genetic mutations, excessive protein intake, and insufficient kidney function, such as chronic kidney disease and acute kidney injury. Symptoms of protein toxicity include unexplained vomiting and loss of appetite. Untreated protein toxicity can lead to serious complications such as seizures, encephalopathy, further kidney damage, and even death. Definition Protein toxicity occurs when protein metabolic wastes build up in the body. During protein metabolism, nitrogenous wastes such as urea, uric acid, ammonia, and creatinine are produced. These compounds are not utilized by the human body and are usually excreted by the kidney. However, due to conditions such as renal insufficiency, the under-functioning kidney is unable to excrete these metabolic wastes, causing them to accumulate in the body and lead to toxicity. Although there are many causes of protein toxicity, this condition is most prevalent in people with chronic kidney disease who consume a protein-rich diet, specifically, proteins from animal sources that are rapidly digested and metabolized, rapidly releasing a high concentration of protein metabolic wastes into the blood stream. Causes and pathophysiology Protein toxicity has a significant role in neurodegenerative diseases, whether it is due to high protein intake, pathological disorders that lead to the accumulation of protein waste products, inefficient metabolism of proteins, or oligomerization of amino acids from proteolysis. The mechanisms by which protein can lead to well-known neurodegenerative diseases include transcriptional dysfunction, propagation, pathological cytoplasmic inclusions, and mitochondrial and stress granule dysfunction. Ammonia, one of the waste products of protein metabolism, is very harmful, especially to the brain.
https://en.wikipedia.org/wiki/Computerized%20adaptive%20testing
Computerized adaptive testing (CAT) is a form of computer-based test that adapts to the examinee's ability level. For this reason, it has also been called tailored testing. In other words, it is a form of computer-administered test in which the next item or set of items selected to be administered depends on the correctness of the test taker's responses to the most recent items administered. How it works CAT successively selects questions for the purpose of maximizing the precision of the exam based on what is known about the examinee from previous questions. From the examinee's perspective, the difficulty of the exam seems to tailor itself to their level of ability. For example, if an examinee performs well on an item of intermediate difficulty, they will then be presented with a more difficult question. Or, if they performed poorly, they would be presented with a simpler question. Compared to static tests that nearly everyone has experienced, with a fixed set of items administered to all examinees, computer-adaptive tests require fewer test items to arrive at equally accurate scores. The basic computer-adaptive testing method is an iterative algorithm with the following steps: The pool of available items is searched for the optimal item, based on the current estimate of the examinee's ability The chosen item is presented to the examinee, who then answers it correctly or incorrectly The ability estimate is updated, based on all prior answers Steps 1–3 are repeated until a termination criterion is met Nothing is known about the examinee prior to the administration of the first item, so the algorithm is generally started by selecting an item of medium, or medium-easy, difficulty as the first item. As a result of adaptive administration, different examinees receive quite different tests. Although examinees are typically administered different tests, their ability scores are comparable to one another (i.e., as if they had received the same test, as is common
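A minimal simulation of this four-step loop (Python, with an invented item bank and a one-parameter Rasch model standing in for a calibrated pool) shows the moving parts: information-maximizing selection, response, ability update, and a fixed-length termination criterion.

# Minimal sketch of the four-step adaptive loop described above, using a
# one-parameter (Rasch) item response model. Item difficulties and the
# simulated examinee are invented; a real CAT would use a calibrated bank
# and a proper termination criterion.
import math, random

random.seed(1)
bank = [random.uniform(-3, 3) for _ in range(200)]   # item difficulties
true_theta = 1.2                                     # simulated ability

def p_correct(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))      # Rasch (1PL) model

theta, used, answered = 0.0, set(), []
for _ in range(20):                                  # fixed-length termination
    # 1. select the unused item closest to the current ability estimate,
    #    which maximizes Fisher information under the Rasch model
    item = min((b for b in bank if b not in used), key=lambda b: abs(b - theta))
    used.add(item)
    # 2. administer the item (here: simulate the response)
    answered.append((item, random.random() < p_correct(true_theta, item)))
    # 3. re-estimate ability with one Newton step on the log-likelihood
    grad = sum(c - p_correct(theta, b) for b, c in answered)
    info = sum(p_correct(theta, b) * (1 - p_correct(theta, b)) for b, _ in answered)
    theta += grad / info
# 4. the loop's range() is the termination criterion in this sketch
print(f"estimated ability {theta:+.2f} vs true {true_theta:+.2f}")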
https://en.wikipedia.org/wiki/Trustworthy%20computing
The term Trustworthy Computing (TwC) has been applied to computing systems that are inherently secure, available, and reliable. It is particularly associated with the Microsoft initiative of the same name, launched in 2002. History Until 1995, there were restrictions on commercial traffic over the Internet. On May 26, 1995, Bill Gates sent the "Internet Tidal Wave" memorandum to Microsoft executives assigning "...the Internet this highest level of importance..." but Microsoft's Windows 95 was released without a web browser as Microsoft had not yet developed one. The success of the web had caught them by surprise but by mid-1995, they were testing their own web server, and on August 24, 1995, launched a major online service, MSN. The National Research Council recognized that the rise of the Internet simultaneously increased societal reliance on computer systems while increasing the vulnerability of such systems to failure and produced an important report in 1999, "Trust in Cyberspace". This report reviews the cost of un-trustworthy systems and identifies actions required for improvement. Microsoft and Trustworthy Computing Bill Gates launched Microsoft's "Trustworthy Computing" initiative with a January 15, 2002 memo, referencing an internal whitepaper by Microsoft CTO and Senior Vice President Craig Mundie. The move was reportedly prompted by the fact that they "...had been under fire from some of its larger customers–government agencies, financial companies and others–about the security problems in Windows, issues that were being brought front and center by a series of self-replicating worms and embarrassing attacks." such as Code Red, Nimda, Klez and Slammer. Four areas were identified as the initiative's key areas: Security, Privacy, Reliability, and Business Integrity, and despite some initial scepticism, at its 10-year anniversary it was generally accepted as having "...made a positive impact on the industry...". The Trustworthy Computing campaign was t
https://en.wikipedia.org/wiki/Starch%20gelatinization
Starch gelatinization is a process of breaking down the intermolecular bonds of starch molecules in the presence of water and heat, allowing the hydrogen bonding sites (the hydroxyl hydrogen and oxygen) to engage more water. This irreversibly dissolves the starch granule in water. Water acts as a plasticizer. Gelatinization Process Three main processes happen to the starch granule: granule swelling, crystallite and double-helical melting, and amylose leaching. Granule swelling: During heating, water is first absorbed in the amorphous space of starch, which leads to a swelling phenomenon. Melting of double helical structures: Water then enters via amorphous regions into the tightly bound areas of double helical structures of amylopectin. At ambient temperatures these crystalline regions do not allow water to enter. Heat causes such regions to become diffuse, the amylose chains begin to dissolve, to separate into an amorphous form and the number and size of crystalline regions decreases. Under the microscope in polarized light starch loses its birefringence and its extinction cross. Amylose Leaching: Penetration of water thus increases the randomness in the starch granule structure, and causes swelling; eventually amylose molecules leach into the surrounding water and the granule structure disintegrates. The gelatinization temperature of starch depends upon plant type and the amount of water present, pH, types and concentration of salt, sugar, fat and protein in the recipe, as well as the starch derivatisation technology used. Some types of unmodified native starches start swelling at 55 °C, other types at 85 °C. The gelatinization temperature of modified starch depends on, for example, the degree of cross-linking, acid treatment, or acetylation. Gel temperature can also be modified by genetic manipulation of starch synthase genes. Gelatinization temperature also depends on the amount of damaged starch granules; these will swell faster. Damaged starch can be
https://en.wikipedia.org/wiki/Resurrection%20ecology
"Resurrection ecology" is an evolutionary biology technique whereby researchers hatch dormant eggs from lake sediments to study animals as they existed decades ago. It is a new approach that might allow scientists to observe evolution as it occurred, by comparing the animal forms hatched from older eggs with their extant descendants. This technique is particularly important because the live organisms hatched from egg banks can be used to learn about the evolution of behavioural, plastic or competitive traits that are not apparent from more traditional paleontological methods. One such researcher in the field is W. Charles Kerfoot of Michigan Technological University whose results were published in the journal Limnology and Oceanography. He reported on success in a search for "resting eggs" of zooplankton that are dormant in Portage Lake on Michigan's Upper Peninsula. The lake has undergone a considerable amount of change over the last 100 years including flooding by copper mine debris, dredging, and eutrophication. Others have used this technique to explore the evolutionary effects of eutrophication, predation, and metal contamination. Resurrection ecology provided the best empirical example of the "Red Queen Hypothesis" in nature. Any organism that produces a resting stage can be used for resurrection ecology. However, the most frequently used organism is the water flea, Daphnia. This genus has well-established protocols for lab experimentation and usually asexually reproduces allowing for experiments on many individuals with the same genotype. Although the more esoteric demonstration of natural selection is alone a valuable aspect of the study described, there is a clear ecological implication in the discovery that very old zooplankton eggs have survived in the lake: the potential still exists, if and when this environment is restored to something of a more pristine nature, for at least some of the original (pre-disturbance) inhabitants to re-establish populatio
https://en.wikipedia.org/wiki/Automatic%20variable
In computer programming, an automatic variable is a local variable which is allocated and deallocated automatically when program flow enters and leaves the variable's scope. The scope is the lexical context, particularly the function or block in which a variable is defined. Local data is typically (in most languages) invisible outside the function or lexical context where it is defined. Local data is also invisible and inaccessible to a called function, but is not deallocated, coming back in scope as the execution thread returns to the caller. The concept of automatic local variables primarily applies to recursive, lexically-scoped languages. Automatic local variables are normally allocated in the stack frame of the procedure in which they are declared. This was originally done to achieve re-entrancy and allow recursion, a consideration that still applies today. The concept of automatic variables in recursive (and nested) functions in a lexically scoped language was introduced to the wider audience with ALGOL in the late 1950s, and further popularized by its many descendants. The term local variable is usually synonymous with automatic variable, since these are the same thing in many programming languages, but local is more general – most local variables are automatic local variables, but static local variables also exist, notably in C. For a static local variable, the allocation is static (the lifetime is the entire program execution), not automatic, but it is only in scope during the execution of the function. In specific programming languages C, C++ (called automatic variables) All variables declared within a block of code are automatic by default. An uninitialized automatic variable has an undefined value until it is assigned a valid value of its type. The storage-class specifier auto can be added to these variable declarations as well, but as they are all automatic by default this is entirely redundant and rarely done. In C, using the storage class register is a
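Although the passage discusses C, the lifetime distinction can be illustrated by analogy in Python (a hedged sketch, not a claim about C semantics): ordinary locals are automatic, freshly bound on every call and at every recursion depth, while a mutable default argument is allocated once and therefore behaves like a static local.

# Cross-language illustration (the passage discusses C; this is an analogy):
# Python locals are automatic -- each call, including each recursive call,
# gets a fresh binding -- while a mutable default argument is evaluated once,
# giving it a static-like lifetime across calls.

def automatic():
    n = 0          # fresh on every invocation
    n += 1
    return n

def static_like(_box=[0]):   # default list allocated once, at definition time
    _box[0] += 1
    return _box[0]

print(automatic(), automatic())      # 1 1  -- no state survives the call
print(static_like(), static_like())  # 1 2  -- state persists between calls

def depth(k):                 # automatic locals make recursion safe:
    local = k                 # each recursive frame has its own 'local'
    return local if k == 0 else depth(k - 1)

print(depth(3))               # 0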
https://en.wikipedia.org/wiki/Shmuel%20Winograd
Shmuel Winograd (January 4, 1936 – March 25, 2019) was an Israeli-American computer scientist, noted for his contributions to computational complexity. He proved several major results regarding the computational aspects of arithmetic; his contributions include the Coppersmith–Winograd algorithm and an algorithm for the fast Fourier transform that turns it into a problem of computing convolutions, which can in turn be solved by another of Winograd's algorithms. Winograd studied Electrical Engineering at the Massachusetts Institute of Technology, receiving his B.S. and M.S. degrees in 1959. He received his Ph.D. from the Courant Institute of Mathematical Sciences at New York University in 1968. He joined the research staff at IBM in 1961, eventually becoming director of the Mathematical Sciences Department there from 1970 to 1974 and 1980 to 1994. Honors IBM Fellow (1972) Fellow of the Institute of Electrical and Electronics Engineers (1974) W. Wallace McDowell Award (1974) Member, National Academy of Sciences (1978) Member, American Academy of Arts and Sciences (1983) Member, American Philosophical Society (1989) Fellow of the Association for Computing Machinery (1994) Books
https://en.wikipedia.org/wiki/Tazobactam
Tazobactam is a pharmaceutical drug that inhibits the action of bacterial β-lactamases, especially those belonging to the SHV-1 and TEM groups. It is commonly used as its sodium salt, tazobactam sodium. Tazobactam is combined with the extended spectrum β-lactam antibiotic piperacillin in the drug piperacillin/tazobactam, used in infections due to Pseudomonas aeruginosa. Tazobactam broadens the spectrum of piperacillin by making it effective against organisms that express β-lactamase and would normally degrade piperacillin. Tazobactam was patented in 1982 and came into medical use in 1992. See also Ceftolozane Sulbactam Clavulanate
https://en.wikipedia.org/wiki/Sequela
A sequela (, ; usually used in the plural, sequelae ) is a pathological condition resulting from a disease, injury, therapy, or other trauma. Derived from the Latin word meaning "sequel", it is used in the medical field to mean a complication or condition following a prior illness or disease. A typical sequela is a chronic complication of an acute condition—in other words, a long-term effect of a temporary disease or injury—which follows immediately from the condition. Sequelae differ from late effects, which can appear long after—even several decades after—the original condition has resolved. In general, non-medical usage, the terms sequela and sequelae mean consequence and consequences. Examples and uses Chronic kidney disease, for example, is sometimes a sequela of diabetes; "chronic constipation" or more accurately "obstipation" (that is, inability to pass stool or gas) is a sequela to an intestinal obstruction; and neck pain is a common sequela of whiplash or other trauma to the cervical vertebrae. Post-traumatic stress disorder may be a psychological sequela of rape. Sequelae of traumatic brain injury include headache, dizziness, anxiety, apathy, depression, aggression, cognitive impairments, personality changes, mania, and psychosis. COVID-19 is also known to cause post-acute sequelae, known as long COVID, post-COVID syndrome, or post-acute sequelae of COVID-19 (PASC). This refers to the continuation of COVID-19 symptoms or the development of new ones, four or more weeks after the initial infection; these symptoms may persist for weeks and months. Post-COVID syndrome can occur in individuals who were asymptomatic for COVID, as well as those ranging from mild illness to severe hospitalization. These most commonly reported sequelae include fatigue, shortness of breath, chest pain, loss of smell, and brain fog; symptoms drastically range, from mild illness to severe impairment. Some conditions may be diagnosed retrospectively from their sequelae. An examp
https://en.wikipedia.org/wiki/Dispersion%20%28water%20waves%29
In fluid dynamics, dispersion of water waves generally refers to frequency dispersion, which means that waves of different wavelengths travel at different phase speeds. Water waves, in this context, are waves propagating on the water surface, with gravity and surface tension as the restoring forces. As a result, water with a free surface is generally considered to be a dispersive medium. For a certain water depth, surface gravity waves – i.e. waves occurring at the air–water interface and gravity as the only force restoring it to flatness – propagate faster with increasing wavelength. On the other hand, for a given (fixed) wavelength, gravity waves in deeper water have a larger phase speed than in shallower water. In contrast with the behavior of gravity waves, capillary waves (i.e. only forced by surface tension) propagate faster for shorter wavelengths. Besides frequency dispersion, water waves also exhibit amplitude dispersion. This is a nonlinear effect, by which waves of larger amplitude have a different phase speed from small-amplitude waves. Frequency dispersion for surface gravity waves This section is about frequency dispersion for waves on a fluid layer forced by gravity, and according to linear theory. For surface tension effects on frequency dispersion, see surface tension effects in Airy wave theory and capillary wave. Wave propagation and dispersion The simplest propagating wave of unchanging form is a sine wave. A sine wave with water surface elevation η(x, t) is given by: $\eta(x, t) = a \sin(\theta(x, t)),$ where a is the amplitude (in metres) and θ = θ(x, t) is the phase function (in radians), depending on the horizontal position (x, in metres) and time (t, in seconds): $\theta = 2\pi \left( \frac{x}{\lambda} - \frac{t}{T} \right) = kx - \omega t,$ with $k = \frac{2\pi}{\lambda}$ and $\omega = \frac{2\pi}{T},$ where: λ is the wavelength (in metres), T is the period (in seconds), k is the wavenumber (in radians per metre) and ω is the angular frequency (in radians per second). Characteristic phases of a water wave are: the upward zero-crossing at θ = 0, the wave crest at θ = ½ π, the downward zero-crossing at θ = π, and the wave trough at θ = 1½ π.
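Both depth effects described above follow from the standard Airy-theory dispersion relation $\omega^2 = gk \tanh(kh)$ (a standard linear result, not stated in the excerpt itself). The Python sketch below, with illustrative wavelengths and depths, evaluates the phase speed c = ω/k: longer waves travel faster in deep water, and in shallow water the speed saturates near √(gh).

# Sketch using the standard Airy-theory dispersion relation for surface
# gravity waves, w^2 = g k tanh(k h): phase speed c = w/k grows with
# wavelength in deep water, and approaches sqrt(g h) when the water is
# shallow compared with the wavelength.
import math

g = 9.81  # gravitational acceleration [m/s^2]

def phase_speed(wavelength, depth):
    k = 2 * math.pi / wavelength           # wavenumber [rad/m]
    omega = math.sqrt(g * k * math.tanh(k * depth))
    return omega / k

for lam in (10, 50, 200):                  # wavelengths [m]
    deep = phase_speed(lam, depth=2000)
    shallow = phase_speed(lam, depth=5)
    print(f"lambda={lam:>3} m: c={deep:5.2f} m/s (deep), "
          f"c={shallow:5.2f} m/s (5 m depth, limit {math.sqrt(g*5):.2f})")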
https://en.wikipedia.org/wiki/Damping
In physical systems, damping is the loss of energy of an oscillating system by dissipation. Damping is an influence within or upon an oscillatory system that has the effect of reducing or preventing its oscillation. Examples of damping include viscous damping in a fluid (see viscous drag), surface friction, radiation, resistance in electronic oscillators, and absorption and scattering of light in optical oscillators. Damping not based on energy loss can be important in other oscillating systems such as those that occur in biological systems and bikes (e.g., suspension (mechanics)). Damping is not to be confused with friction, which is a type of dissipative force acting on a system. Friction can cause or be a factor of damping. The damping ratio is a dimensionless measure describing how oscillations in a system decay after a disturbance. Many systems exhibit oscillatory behavior when they are disturbed from their position of static equilibrium. A mass suspended from a spring, for example, might, if pulled and released, bounce up and down. On each bounce, the system tends to return to its equilibrium position, but overshoots it. Sometimes losses (e.g. frictional) damp the system and can cause the oscillations to gradually decay in amplitude towards zero or attenuate. The damping ratio is a measure describing how rapidly the oscillations decay from one bounce to the next. The damping ratio is a system parameter, denoted by ζ ("zeta"), that can vary from undamped (ζ = 0), underdamped (ζ < 1) through critically damped (ζ = 1) to overdamped (ζ > 1). The behaviour of oscillating systems is often of interest in a diverse range of disciplines that include control engineering, chemical engineering, mechanical engineering, structural engineering, and electrical engineering. The physical quantity that is oscillating varies greatly, and could be the swaying of a tall building in the wind, or the speed of an electric motor, but a normalised, or non-dimensionalised approach can be convenient in descr
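The classification can be made concrete for the mass-spring-damper example, where the standard result ζ = c / (2√(km)) applies (parameter values in the Python sketch below are illustrative):

# Sketch: classify a mass-spring-damper by its damping ratio
# zeta = c / (2*sqrt(k*m)), the dimensionless measure described above.
import math

def damping_ratio(m, c, k):
    return c / (2.0 * math.sqrt(k * m))

def regime(zeta):
    if zeta == 0:  return "undamped"
    if zeta < 1:   return "underdamped (decaying oscillation)"
    if zeta == 1:  return "critically damped"
    return "overdamped (no oscillation, slow return)"

for c in (0.0, 2.0, 20.0, 60.0):          # damper coefficients [N s/m]
    z = damping_ratio(m=1.0, c=c, k=100.0)
    print(f"c={c:5.1f}: zeta={z:.2f} -> {regime(z)}")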
https://en.wikipedia.org/wiki/Gamma%20spectroscopy
Gamma-ray spectroscopy is the qualitative study of the energy spectra of gamma-ray sources, such as in the nuclear industry, geochemical investigation, and astrophysics. Gamma-ray spectrometry, on the other hand, is the method used to acquire a quantitative spectrum measurement. Most radioactive sources produce gamma rays, which are of various energies and intensities. When these emissions are detected and analyzed with a spectroscopy system, a gamma-ray energy spectrum can be produced. A detailed analysis of this spectrum is typically used to determine the identity and quantity of gamma emitters present in a gamma source, and is a vital tool in radiometric assay. The gamma spectrum is characteristic of the gamma-emitting nuclides contained in the source, just as, in an optical spectrometer, the optical spectrum is characteristic of the material contained in a sample. Gamma ray characteristics Gamma rays are the highest-energy form of electromagnetic radiation, being physically the same as all other forms (e.g., X-rays, visible light, infrared, radio) but having (in general) higher photon energy due to their shorter wavelength. Because of this, the energy of gamma-ray photons can be resolved individually, and a gamma-ray spectrometer can measure and display the energies of the gamma-ray photons detected. Radioactive nuclei (radionuclides) commonly emit gamma rays in the energy range from a few keV to ~10 MeV, corresponding to the typical energy levels in nuclei with reasonably long lifetimes. Such sources typically produce gamma-ray "line spectra" (i.e., many photons emitted at discrete energies), whereas much higher energies (upwards of 1 TeV) may occur in the continuum spectra observed in astrophysics and elementary particle physics. The difference between gamma rays and X-rays is somewhat blurred. Gamma rays always come from transitions between energy levels of the nucleus and are monoenergetic, whereas X-rays are electrically generated (X-ray tube, linear accel
https://en.wikipedia.org/wiki/Ecological%20Genetics%20%28book%29
Ecological Genetics is a 1964 book by the British biologist E. B. Ford on ecological genetics. Ford founded the field and it is considered his magnum opus. The fourth and final edition was published in 1975. Ford's work was celebrated in 1971 by Ecological Genetics and Evolution, a series of essays edited by Robert Creed, published by Blackwell, Oxford. This included contributions from Cyril Darlington, Miriam Rothschild, Theodosius Dobzhansky, Bryan Clarke, A.J. Cain, Sir Cyril Clarke and others. Ford and Ronald Fisher represented one side of a dispute with the American Sewall Wright over the relative roles of selection and drift in evolution. See also Papilio dardanus (the swallowtail butterfly that is the subject of chapter thirteen)
https://en.wikipedia.org/wiki/Transfer%20length%20method
The Transfer Length Method or the "Transmission Line Model" (both abbreviated as TLM) is a technique used in semiconductor physics and engineering to determine the specific contact resistivity between a metal and a semiconductor. TLM has been developed because with the ongoing device shrinkage in microelectronics the relative contribution of the contact resistance at metal-semiconductor interfaces in a device could no longer be neglected, and an accurate measurement method for determining the specific contact resistivity was required. General description The goal of the transfer length method (TLM) is the determination of the specific contact resistivity of a metal-semiconductor junction. To create a metal-semiconductor junction a metal film is deposited on the surface of a semiconductor substrate. The TLM is usually used to determine the specific contact resistivity when the metal-semiconductor junction shows ohmic behaviour. In this case the contact resistivity can be defined as the voltage difference across the interfacial layer between the deposited metal and the semiconductor substrate divided by the current density, which is defined as the current divided by the interfacial area through which the current is passing: $\rho_c = \frac{V_1 - V_2}{J}, \qquad J = \frac{I}{A}.$ In this definition of the specific contact resistivity, $V_1$ refers to the voltage value just below the metal-semiconductor interfacial layer while $V_2$ represents the voltage value just above the metal-semiconductor interfacial layer. There are two different methods of performing TLM measurements which are both introduced in the remainder of this section. One is called just transfer length method while the other is named circular transfer length method (c-TLM). TLM To determine the specific contact resistivity an array of rectangular metal pads is deposited on the surface of a semiconductor substrate as it is depicted in the image to the right. The definition of the rectangular pads can be done by utilizing photolithography while the metal depo
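The excerpt ends before the measurement analysis, so the following Python sketch uses the standard linear-TLM relations rather than the article's own derivation: between pads a distance d apart, the measured resistance is R(d) = (Rsh/W)·d + 2Rc, so a straight-line fit yields the sheet resistance from the slope, the contact resistance from half the intercept, the transfer length LT from the x-intercept, and finally ρc ≈ Rc·LT·W. The measurement values are invented.

# Sketch of the standard linear-TLM analysis (assumed, not quoted from the
# article): R(d) = (Rsh/W) * d + 2*Rc for pads spaced d apart. A
# least-squares line through measured (d, R) pairs gives Rsh, Rc, the
# transfer length LT = |x-intercept|/2, and rho_c ~ Rc * LT * W.
spacings = [5e-6, 10e-6, 20e-6, 40e-6]        # pad gaps d [m]
resistances = [11.0, 16.0, 26.0, 46.0]        # measured R [ohm] (invented)
W = 100e-6                                    # pad width [m]

n = len(spacings)
mean_d = sum(spacings) / n
mean_r = sum(resistances) / n
slope = (sum((d - mean_d) * (r - mean_r) for d, r in zip(spacings, resistances))
         / sum((d - mean_d) ** 2 for d in spacings))
intercept = mean_r - slope * mean_d           # fit R(d) = slope*d + intercept

r_sheet = slope * W                           # sheet resistance [ohm/sq]
r_contact = intercept / 2.0                   # contact resistance [ohm]
l_transfer = intercept / (2.0 * slope)        # transfer length LT [m]
rho_c = r_contact * l_transfer * W            # specific contact resistivity

print(f"Rsh={r_sheet:.1f} ohm/sq  Rc={r_contact:.2f} ohm  "
      f"LT={l_transfer*1e6:.2f} um  rho_c={rho_c*1e4:.2e} ohm cm^2")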
https://en.wikipedia.org/wiki/Equianharmonic
In mathematics, and in particular the study of Weierstrass elliptic functions, the equianharmonic case occurs when the Weierstrass invariants satisfy g2 = 0 and g3 = 1. This page follows the terminology of Abramowitz and Stegun; see also the lemniscatic case. (These are special examples of complex multiplication.) In the equianharmonic case, the minimal half period ω2 is real and equal to ω2 = Γ(1/3)^3 / (4π) ≈ 1.5299540, where Γ is the Gamma function. The other half period is ω1 = (1/2) ω2 (−1 + i√3). Here the period lattice is a real multiple of the Eisenstein integers. The constants e1, e2 and e3 are the roots of 4t^3 − 1 = 0 and are given by e1 = 4^(−1/3) e^(2πi/3), e2 = 4^(−1/3), e3 = 4^(−1/3) e^(−2πi/3). The case g2 = 0, g3 = a may be handled by a scaling transformation. Modular forms Elliptic curves Elliptic functions
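The constants above can be checked numerically. The short sketch below (an illustration, not part of the original text) evaluates ω2 from the Gamma function and verifies that e1, e2, e3 are the three roots of 4t^3 − 1 = 0:

```python
# Numeric check of the quoted constants for the equianharmonic case
# g2 = 0, g3 = 1 (illustrative, not from the original text).
import cmath
import math

omega2 = math.gamma(1.0 / 3.0) ** 3 / (4.0 * math.pi)
print(f"omega2 = {omega2:.10f}")   # ~ 1.5299540370

# e1, e2, e3 are 4**(-1/3) times the three cube roots of unity.
roots = [4.0 ** (-1.0 / 3.0) * cmath.exp(2j * math.pi * k / 3) for k in range(3)]
for e in roots:
    print(e, abs(4 * e ** 3 - 1) < 1e-12)   # residual of 4t^3 - 1 vanishes
```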
https://en.wikipedia.org/wiki/East%20Melanesian%20Islands
The East Melanesian Islands, also known as the Solomons-Vanuatu-Bismarck moist forests, is a biogeographic region in the Melanesia subregion of Oceania. Biogeographically, the East Melanesian Islands are part of the Australasian realm. The region is notable for its unique flora and fauna and species richness. It is designated a biodiversity hotspot by Conservation International (CI), and one of the outstanding Global 200 ecoregions by the World Wide Fund for Nature (WWF). Geography As defined by CI, the hotspot lies east and north-east of New Guinea and encompasses some 1,600 islands with a land area of nearly 100,000 km2, including the Bismarck Archipelago (including the Admiralty Islands), the Santa Cruz Islands, the Solomon Islands Archipelago (including Bougainville Island), and the Vanuatu Islands. Politically, the hotspot includes the Islands Region of Papua New Guinea (including Bougainville) and all of Solomon Islands and Vanuatu. The East Melanesian Islands have many plants and some animals whose ancestors arrived from neighboring New Caledonia and New Guinea, but differ from those islands in that they were never joined to a continent. Ecoregions This hotspot includes a number of ecoregions that make up the northeastern portion of the Australasian realm. Admiralty Islands lowland rain forests — (Papua New Guinea) New Britain-New Ireland lowland rain forests — (Papua New Guinea) New Britain-New Ireland montane rain forests — (Papua New Guinea) Solomon Islands rain forests — (Solomon Islands, Papua New Guinea including Bougainville Island) Vanuatu rain forests — (Vanuatu, Solomon Islands) See also External links East Melanesian Islands (Conservation International) Solomons-Vanuatu-Bismarck moist forests: A Global 200 ecoregion (World Wide Fund for Nature) Ecoregions Geography of Melanesia Australasian realm Tropical and subtropical moist broadleaf forests Geography of Papua New Guinea Geography of the Solomon Islands Admiralty Islands Bismarck
https://en.wikipedia.org/wiki/Delta-sigma%20modulation
Delta-sigma (ΔΣ; or sigma-delta, ΣΔ) modulation is an oversampling method for encoding signals into low bit depth digital signals at a very high sampling frequency as part of the process of delta-sigma analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). Delta-sigma modulation achieves high quality by utilizing a negative feedback loop during quantization to the lower bit depth that continuously corrects quantization errors and moves quantization noise to higher frequencies well above the original signal's bandwidth. Subsequent low-pass filtering for demodulation easily removes this high-frequency noise, and time averaging achieves high accuracy in amplitude. Both ADCs and DACs can employ delta-sigma modulation. A delta-sigma ADC (e.g. Figure 1 top) encodes an analog signal using high-frequency delta-sigma modulation and then applies a digital filter to demodulate it to a high-bit digital output at a lower sampling frequency. A delta-sigma DAC (e.g. Figure 1 bottom) encodes a high-resolution digital input signal into a lower-resolution but higher-sampling-frequency signal that may then be mapped to voltages and smoothed with an analog filter for demodulation. In both cases, the temporary use of a low bit depth signal at a higher sampling frequency simplifies circuit design and takes advantage of the efficiency and high accuracy in time of digital electronics. Primarily because of its cost efficiency and reduced circuit complexity, this technique has found increasing use in modern electronic components such as DACs, ADCs, frequency synthesizers, switched-mode power supplies and motor controllers. The coarsely quantized output of a delta-sigma ADC is occasionally used directly in signal processing or as a representation for signal storage (e.g., Super Audio CD stores the raw output of a 1-bit delta-sigma modulator). Motivation When transmitting an analog signal directly, all noise in the system and transmission is added to the analog signal, re
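A first-order modulator makes the feedback idea concrete. The sketch below (a toy illustration with assumed parameters, not a production converter design) integrates the difference between the input and the fed-back 1-bit output, so the density of 1s in the output stream tracks the input amplitude:

```python
# Toy first-order delta-sigma modulator. The integrator accumulates the error
# between the input and the fed-back 1-bit output; a moving average over the
# bit stream (a crude low-pass filter) recovers the input.
import math

def delta_sigma_1bit(samples):
    """Encode values in [-1, 1] as a 1-bit stream via first-order modulation."""
    integrator = 0.0
    feedback = 0.0
    bits = []
    for x in samples:
        integrator += x - feedback            # delta: input minus fed-back output
        bit = 1 if integrator >= 0.0 else 0   # 1-bit quantizer
        feedback = 1.0 if bit else -1.0       # feedback DAC: +/- full scale
        bits.append(bit)
    return bits

# Oversampled slow sine wave: many modulator samples per signal period.
wave = [0.5 * math.sin(2 * math.pi * k / 256) for k in range(1024)]
stream = delta_sigma_1bit(wave)

# Crude low-pass demodulation: bit density p maps back to amplitude 2p - 1.
window = 32
recovered = [(2 * sum(stream[i:i + window]) / window) - 1
             for i in range(0, len(stream) - window, window)]
print(recovered[:8])  # roughly follows the 0.5*sin(...) input
```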
https://en.wikipedia.org/wiki/Sergeant%20Kirk
Sergeant Kirk or Sgt. Kirk is the main character of the Western comics series of the same title by Italian comic book creator Hugo Pratt and Argentine author Héctor Germán Oesterheld. Publication history The series, created in Argentina during the Golden Age of Argentine Comics, was first published in issue 225 of the weekly comics magazine Misterix on January 9, 1953. Sargento Kirk continued its run in Misterix until issue 475 on December 20, 1957, when Oesterheld's own publishing house Editorial Frontera was established, and the series resumed in the Frontera magazines Frontera Extra and Hora Cero Suplemento Semanal. It ran until 1961 with drawings by Pratt, Jorge Moliterni, Horacio Porreca and Gisela Dexter. An additional Sargento Kirk story was published in 1973 in the magazine Billiken, drawn by Gustavo Trigo. Synopsis Sergeant Kirk is a former soldier of the American Civil War who goes on to serve in the post-war Wild West. Forced to participate in a massacre of American Indians by the U.S. Army, Kirk deserts and devotes himself to defending the Indians. An essentially noble man, Kirk treats even his enemies with tolerance and humanitarianism. Among Kirk's companions appearing in the series are "El Corto", Dr. Forbes and the Native American boy, Maha. Magazine Sergeant Kirk was chosen as the name of the magazine Pratt launched in Italy in July 1967, aided financially by the patron Florenzo Ivaldi. It showcased work from the Argentine period, as well as the series Luck Star O'Hara, Gli Scorpioni del Deserto, and the first publication of Corto Maltese in the story Una Ballata del Mare Salato. The magazine maintained regular distribution until December 1969 and was issued sporadically during the 1970s. Notes Sources Sgt. Kirk publications, Misterix publications Archivespratt Sgt. Kirk magazine dossier FFF External links Sergent Kirk French album publications Bedetheque Sergent Kirk Sergent Kirk Italian comics characters Sergent Kirk We
https://en.wikipedia.org/wiki/Social%20choice%20theory
Social choice theory or social choice is a theoretical framework for analyzing how individual opinions, preferences, interests, or welfares can be combined to reach a collective decision or social welfare in some sense. Whereas choice theory is concerned with individuals making choices based on their preferences, social choice theory is concerned with how to translate the preferences of individuals into the preferences of a group. A non-theoretical example of a collective decision is enacting a law or set of laws under a constitution. Another example is voting, where individual preferences over candidates are collected to elect a person who best represents the group's preferences. Social choice blends elements of welfare economics and public choice theory. It is methodologically individualistic, in that it aggregates preferences and behaviors of individual members of society. Using elements of formal logic for generality, analysis proceeds from a set of seemingly reasonable axioms of social choice to form a social welfare function (or constitution). Results uncovered the logical incompatibility of various axioms, as in Arrow's theorem, revealing an aggregation problem and suggesting reformulation or theoretical triage in dropping some axiom(s). Overlap with public choice theory "Public choice" and "social choice" are heavily overlapping fields of endeavor, but they are disjoint if narrowly construed. The Journal of Economic Literature classification codes place Social Choice under Microeconomics at JEL D71 (with Clubs, Committees, and Associations) whereas most Public Choice subcategories are in JEL D72 (Economic Models of Political Processes: Rent-Seeking, Elections, Legislatures, and Voting Behavior). Social choice theory (and public choice theory) dates from Condorcet's formulation of the voting paradox, though it arguably goes back further to Ramon Llull's 1299 publication. Kenneth Arrow's Social Choice and Individua
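Condorcet's voting paradox can be reproduced in a few lines. The sketch below (with ballots invented for illustration) computes pairwise majorities for three voters whose individual rankings are transitive, and exhibits the resulting majority cycle:

```python
# Ballots invented for illustration: three voters with transitive rankings
# whose pairwise majorities nevertheless form a cycle (Condorcet's paradox).
from itertools import combinations

ballots = [
    ["A", "B", "C"],   # voter 1: A > B > C
    ["B", "C", "A"],   # voter 2: B > C > A
    ["C", "A", "B"],   # voter 3: C > A > B
]

for x, y in combinations("ABC", 2):
    # A candidate "wins" a pairwise contest if most voters rank it earlier.
    x_wins = sum(b.index(x) < b.index(y) for b in ballots)
    winner, loser = (x, y) if 2 * x_wins > len(ballots) else (y, x)
    print(f"majority prefers {winner} over {loser}")

# Together the results form a cycle (A beats B, B beats C, C beats A): no
# candidate beats every other, so majority preference is not transitive
# even though every individual ranking is.
```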
https://en.wikipedia.org/wiki/Warehouse%20management%20system
A warehouse management system (WMS) is a set of policies and processes intended to organise the work of a warehouse or distribution centre, and ensure that such a facility can operate efficiently and meet its objectives. In the 20th century the term 'warehouse management information system' was often used to distinguish software that fulfils this function from theoretical systems. Some smaller facilities may use spreadsheets or physical media like pen and paper to document their processes and activities, and this too can be considered a WMS. However, in contemporary usage, the term overwhelmingly refers to computer systems. The core function of a warehouse management system is to record the arrival and departure of inventory. From that starting point, features are added such as recording the precise location of stock within the warehouse, optimising the use of available space, or coordinating tasks for maximum efficiency. There are five factors that make it worth establishing or renewing a company's WMS; a successful implementation of a new WMS brings many benefits that in turn help the company grow and retain loyal customers. The first is real-time inventory management, which helps not only logistics service providers but also their customers to plan resources and inventory accordingly. Second, when a company scans a product at every movement in the facility, the location of products, inventory control and other activities are clear, and the possibility of mishandling any inventory declines greatly. The third factor that emphasizes the importance of WMS systems is faster product delivery, which is highly valued in today's fast-paced, highly competitive environment. The benefits of advanced WMS systems are seen not only when a company needs to send products to its customers/partners but also when dealing with returns. Managing and taking care of customers' returns becomes much easier and more effective if the company is a
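The core record-keeping function described above can be sketched as a small data structure. In the example below, the class and method names are invented for illustration, not any real product's API; it records arrivals and departures per storage location, the WMS starting point:

```python
# Minimal sketch of the core WMS function: an append-only movement ledger
# plus per-location stock tracking. All names here are hypothetical.
from collections import defaultdict
from datetime import datetime, timezone

class Warehouse:
    def __init__(self):
        # stock[sku][location] -> quantity on hand
        self.stock = defaultdict(lambda: defaultdict(int))
        self.ledger = []   # append-only movement history

    def _record(self, kind, sku, qty, location):
        self.ledger.append((datetime.now(timezone.utc), kind, sku, qty, location))

    def receive(self, sku, qty, location):
        self.stock[sku][location] += qty
        self._record("IN", sku, qty, location)

    def ship(self, sku, qty, location):
        if self.stock[sku][location] < qty:
            raise ValueError(f"insufficient stock of {sku} at {location}")
        self.stock[sku][location] -= qty
        self._record("OUT", sku, qty, location)

    def on_hand(self, sku):
        return sum(self.stock[sku].values())

wh = Warehouse()
wh.receive("WIDGET-1", 100, "A-01-03")
wh.ship("WIDGET-1", 30, "A-01-03")
print(wh.on_hand("WIDGET-1"))   # 70
```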
https://en.wikipedia.org/wiki/Thymidine%20diphosphate
Thymidine diphosphate (TDP) or deoxythymidine diphosphate (dTDP) (also thymidine pyrophosphate, dTPP) is a nucleotide diphosphate. It is an ester of pyrophosphoric acid with the nucleoside thymidine. dTDP consists of the pyrophosphate group, the pentose sugar deoxyribose, and the nucleobase thymine. Unlike the other deoxyribonucleotides, thymidine diphosphate does not always contain the "deoxy" prefix in its name. See also Nucleoside Nucleotide DNA RNA Oligonucleotide dTDP-glucose
https://en.wikipedia.org/wiki/Uridine%20diphosphate
Uridine diphosphate, abbreviated UDP, is a nucleotide diphosphate. It is an ester of pyrophosphoric acid with the nucleoside uridine. UDP consists of the pyrophosphate group, the pentose sugar ribose, and the nucleobase uracil. UDP is an important factor in glycogenesis. Before glucose can be stored as glycogen in the liver and muscles, the enzyme UDP-glucose pyrophosphorylase forms a UDP-glucose unit by combining glucose 1-phosphate with uridine triphosphate, cleaving a pyrophosphate ion in the process. Then, the enzyme glycogen synthase combines UDP-glucose units to form a glycogen chain. The UDP molecule is cleaved from the glucose ring during this process and can be reused by UDP-glucose pyrophosphorylase. See also DNA Nucleoside Nucleotide Oligonucleotide RNA UGGT