source: string (lengths 31–203)
text: string (lengths 28–2k)
https://en.wikipedia.org/wiki/Significand
The significand (also mantissa or coefficient, sometimes also argument, or ambiguously fraction or characteristic) is part of a number in scientific notation or in floating-point representation, consisting of its significant digits. Depending on the interpretation of the exponent, the significand may represent an integer or a fraction. Example The number 123.45 can be represented as a decimal floating-point number with the integer 12345 as the significand and a 10^−2 power term, also called the characteristic, where −2 is the exponent (and 10 is the base). Its value is given by the following arithmetic: 123.45 = 12345 × 10^−2. The same value can also be represented in normalized form with 1.2345 as the fractional coefficient, and +2 as the exponent (and 10 as the base): 123.45 = 1.2345 × 10^+2. Schmid, however, called this representation with a significand ranging between 1.0 and 10 a modified normalized form. For base 2, this 1.xxxx form is also called a normalized significand. Finally, the value can be represented in the format given by the Language Independent Arithmetic standard and several programming language standards, including Ada, C, Fortran and Modula-2, as 123.45 = 0.12345 × 10^+3. Schmid called this representation with a significand ranging between 0.1 and 1.0 the true normalized form. For base 2, this 0.xxxx form is also called a normed significand. Significands and the hidden bit For a normalized number, the most significant digit is always non-zero. When working in binary, this constraint uniquely determines this digit to always be 1; as such, it does not need to be explicitly stored, being called the hidden bit. The significand is characterized by its width in (binary) digits, and depending on the context, the hidden bit may or may not be counted towards the width of the significand. For example, the same IEEE 754 double-precision format is commonly described as having either a 53-bit significand, including the hidden bit, or a 52-bit s
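The three conventions above (integer significand, a 1.xxxx binary significand, and a 0.xxxx "true normalized" significand) can be inspected directly with Python's standard library; the short sketch below is illustrative only and is not part of the article.

```python
# Illustrative sketch: the significand/exponent of 123.45 under the three
# conventions described above, using only the standard library.
import math
from decimal import Decimal

x = 123.45

# Decimal form: 123.45 = 12345 x 10^-2 (integer significand, exponent -2).
sign, digits, exponent = Decimal("123.45").as_tuple()
print(digits, exponent)        # (1, 2, 3, 4, 5) -2

# Binary "0.xxxx" (true normalized) form: math.frexp returns m with 0.5 <= m < 1.
m, e = math.frexp(x)           # x == m * 2**e
print(m, e)                    # 0.96445... 7

# Binary "1.xxxx" form: float.hex shows the hidden leading 1 made explicit.
print(x.hex())                 # 0x1.edccccccccccdp+6
```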
https://en.wikipedia.org/wiki/Survival%20analysis
Survival analysis is a branch of statistics for analyzing the expected duration of time until one event occurs, such as death in biological organisms and failure in mechanical systems. This topic is called reliability theory or reliability analysis in engineering, duration analysis or duration modelling in economics, and event history analysis in sociology. Survival analysis attempts to answer certain questions, such as what is the proportion of a population which will survive past a certain time? Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? How do particular circumstances or characteristics increase or decrease the probability of survival? To answer such questions, it is necessary to define "lifetime". In the case of biological survival, death is unambiguous, but for mechanical reliability, failure may not be well-defined, for there may well be mechanical systems in which failure is partial, a matter of degree, or not otherwise localized in time. Even in biological problems, some events (for example, heart attack or other organ failure) may have the same ambiguity. The theory outlined below assumes well-defined events at specific times; other cases may be better treated by models which explicitly account for ambiguous events. More generally, survival analysis involves the modelling of time to event data; in this context, death or failure is considered an "event" in the survival analysis literature – traditionally only a single event occurs for each subject, after which the organism or mechanism is dead or broken. Recurring event or repeated event models relax that assumption. The study of recurring events is relevant in systems reliability, and in many areas of social sciences and medical research. Introduction to survival analysis Survival analysis is used in several ways: To describe the survival times of members of a group Life tables Kaplan–Meier curves Survival function Hazard
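As a concrete illustration of estimating a survival function from censored lifetimes, here is a minimal Kaplan–Meier sketch in Python on made-up (time, event) data; the data and the helper function are hypothetical and not taken from the article.

```python
# Minimal Kaplan-Meier estimate: times are durations, events are 1 for an
# observed death/failure and 0 for a censored observation (subject left the study).
def kaplan_meier(times, events):
    survival, curve = 1.0, []
    at_risk = len(times)
    # Sort by time; at tied times, process events before censorings.
    for t, e in sorted(zip(times, events), key=lambda p: (p[0], -p[1])):
        if e:
            survival *= (at_risk - 1) / at_risk
            curve.append((t, survival))
        at_risk -= 1               # every subject leaves the risk set after its time
    return curve

print(kaplan_meier(times=[2, 3, 3, 5, 8], events=[1, 1, 0, 1, 0]))
# [(2, 0.8), (3, 0.6...), (5, 0.3...)]
```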
https://en.wikipedia.org/wiki/Final%20Fantasy%20XI
also known as Final Fantasy XI Online, is a massively multiplayer online role-playing game (MMORPG), originally developed and published by Squaresoft and then published by Square Enix as the eleventh main installment of the Final Fantasy series. Designed and produced by Hiromichi Tanaka, it was released in Japan on May 16, 2002, for PlayStation 2 and Microsoft Windows-based personal computers in November of that year. The game was the first MMORPG to offer cross-platform play between PlayStation 2 and PC. It was later released for the Xbox 360 in April 2006. All versions of the game require a monthly subscription to play. The story is set in the fantasy world of Vana'diel, where player-created avatars can both compete and cooperate in a variety of objectives to develop an assortment of jobs, skills, and earn in-game item rewards. Players can undertake an array of quests and progress through the in-game hierarchy and through the major plot of the game. Since its debut in 2002, five expansion packs have been released along with six add-on scenarios. Each expansion pack and add-on brings a new major storyline to the Final Fantasy XI world, along with numerous areas, quests, events and item rewards. In 2015, Square Enix released the final main scenario for Final Fantasy XI titled Rhapsodies of Vana'diel. Final Fantasy XI became the final active server on the PlayStation 2 online service. Support for the PlayStation 2 and Xbox 360 versions was ultimately ended on March 31, 2016, leaving only the PC platform playable. A mobile client for the game was under development by Square Enix in collaboration with Korean developer Nexon, using Unreal Engine 4, but was cancelled in late 2020. A spin-off mobile game, Final Fantasy Grandmasters was released on September 30, 2015. As of September 2020, a new, episodic story series titled The Voracious Resurgence has since been added to the game. The storyline concluded in June 2023. In May 2022 rumors had circulated that FFXI may soo
https://en.wikipedia.org/wiki/Pyrenoid
Pyrenoids are sub-cellular micro-compartments found in chloroplasts of many algae, and in a single group of land plants, the hornworts. Pyrenoids are associated with the operation of a carbon-concentrating mechanism (CCM). Their main function is to act as centres of carbon dioxide (CO2) fixation, by generating and maintaining a CO2 rich environment around the photosynthetic enzyme ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO). Pyrenoids therefore seem to have a role analogous to that of carboxysomes in cyanobacteria. Algae are restricted to aqueous environments, even in aquatic habitats, and this has implications for their ability to access CO2 for photosynthesis. CO2 diffuses 10,000 times slower in water than in air, and is also slow to equilibrate. The result of this is that water, as a medium, is often easily depleted of CO2 and is slow to gain CO2 from the air. Finally, CO2 equilibrates with bicarbonate (HCO3−) when dissolved in water, and does so on a pH-dependent basis. In sea water for example, the pH is such that dissolved inorganic carbon (DIC) is mainly found in the form of HCO3−. The net result of this is a low concentration of free CO2 that is barely sufficient for an algal RuBisCO to run at a quarter of its maximum velocity, and thus, CO2 availability may sometimes represent a major limitation of algal photosynthesis. Discovery Pyrenoids were first described in 1803 by Vaucher (cited in Brown et al.). The term was first coined by Schmitz who also observed how algal chloroplasts formed de novo during cell division, leading Schimper to propose that chloroplasts were autonomous, and to surmise that all green plants had originated through the “unification of a colourless organism with one uniformly tinged with chlorophyll". From these pioneering observations, Mereschkowski eventually proposed, in the early 20th century, the symbiogenetic theory and the genetic independence of chloroplasts. In the following half-century, phycologists often us
https://en.wikipedia.org/wiki/James%20Harrison%20%28engineer%29
James Harrison (17 April 1816 – 3 September 1893) was a Scottish Victorian newspaper printer, journalist, politician, and pioneer in the field of mechanical refrigeration. Harrison founded the Geelong Advertiser newspaper and was a member of the Victorian Legislative Council and Victorian Legislative Assembly. Harrison is also remembered as the inventor of the mechanical refrigeration process creating ice and founder of the Victorian Ice Works and as a result, is often called "the father of refrigeration". In 1873 he won a gold medal at the Melbourne Exhibition by proving that meat kept frozen for months remained perfectly edible. Early life James Harrison was born at Bonhill, Dunbartonshire, the son of a fisherman. Harrison attended Anderson's University and then the Glasgow Mechanics' Institution, specialising in chemistry. He trained as a printing apprentice in Glasgow and worked in London as a compositor before emigrating to Sydney, Australia in 1837 to set up a printing press for the English company Tegg & Co. Moving to Melbourne in 1839 he found employment with John Pascoe Fawkner as a compositor and later editor on Fawkner's Port Phillip Patriot. When Fawkner acquired a new press, Harrison offered him 30 pounds for the original old press to start Geelong's first newspaper. The first weekly edition of the Geelong Advertiser appeared November 1840: edited by 'James Harrison and printed and published for John Pascoe Fawkner (sole proprietor) by William Watkins...'. By November 1842, Harrison became sole owner. Political career Harrison was a member of Geelong's first town council in 1850 and represented Geelong in the Victorian Legislative Council from November 1854 until its abolition in March 1856. Harrison then represented Geelong 1858–59 and Geelong West 1859–60 in the Victorian Legislative Assembly. As an editor he was an early advocate for tariff protection which later he brought to prominence when he was editor of The Age under the proprietorship of
https://en.wikipedia.org/wiki/Brodmann%20area
A Brodmann area is a region of the cerebral cortex, in the human or other primate brain, defined by its cytoarchitecture, or histological structure and organization of cells. The concept was first introduced by the German anatomist Korbinian Brodmann in the early 20th century. Brodmann mapped the human brain based on the varied cellular structure across the cortex and identified 52 distinct regions, which he numbered 1 to 52. These regions, or Brodmann areas, correspond with diverse functions including sensation, motor control, and cognition. History Brodmann areas were originally defined and numbered by the German anatomist Korbinian Brodmann based on the cytoarchitectural organization of neurons he observed in the cerebral cortex using the Nissl method of cell staining. Brodmann published his maps of cortical areas in humans, monkeys, and other species in 1909, along with many other findings and observations regarding the general cell types and laminar organization of the mammalian cortex. The same Brodmann area number in different species does not necessarily indicate homologous areas. A similar, but more detailed cortical map was published by Constantin von Economo and Georg N. Koskinas in 1925. Present importance Brodmann areas have been discussed, debated, refined, and renamed exhaustively for nearly a century and remain the most widely known and frequently cited cytoarchitectural organization of the human cortex. Many of the areas Brodmann defined based solely on their neuronal organization have since been correlated closely to diverse cortical functions. For example, Brodmann areas 1, 2 and 3 are the primary somatosensory cortex; area 4 is the primary motor cortex; area 17 is the primary visual cortex; and areas 41 and 42 correspond closely to primary auditory cortex. Higher order functions of the association cortical areas are also consistently localized to the same Brodmann areas by neurophysiological, functional imaging, and other methods (e.g., the
https://en.wikipedia.org/wiki/Great%20Dark%20Spot
The Great Dark Spot (also known as GDS-89, for Great Dark Spot, 1989) was one of a series of dark spots on Neptune similar in appearance to Jupiter's Great Red Spot. In 1989, GDS-89 was the first Great Dark Spot on Neptune to be observed by NASA's Voyager 2 space probe. Like Jupiter's spot, Great Dark Spots are anticyclonic storms. However, their interiors are relatively cloud-free, and unlike Jupiter's spot, which has lasted for hundreds of years, their lifetimes appear to be shorter, forming and dissipating once every few years or so. Based on observations taken with Voyager 2 and since then with the Hubble Space Telescope, Neptune appears to spend somewhat more than half its time with a Great Dark Spot. Little is known about the origins, movement, and disappearance of the dark spots observed on the planet since 1989. Characteristics The Great Dark Spot was captured by NASA's Voyager 2 space probe in Neptune's southern hemisphere. The dark, elliptically shaped spot (with initial dimensions of 13,000 × 6,600 km, or 8,100 × 4,100 mi) of GDS-89 was about the same size as Earth, and was similar in general appearance to Jupiter's Great Red Spot. One major difference compared to Jupiter's Great Red Spot is that Neptune's Great Dark Spot has shown the ability to shift north-south over time, while the Great Red Spot is held in the same latitudinal region by global east-west wind currents. Around the edges of the storm, winds were measured at up to 2,100 kilometers per hour (1,300 mph), the fastest recorded in the Solar System. The Great Dark Spot is thought to be a hole in the methane cloud deck of Neptune. The spot was observed at different times with different sizes and shapes. The Great Dark Spot generated large white clouds at or just below the tropopause layer similar to high-altitude cirrus clouds found on Earth. Unlike the clouds on Earth, however, which are composed of crystals of water ice, Neptune's cirrus clouds are made up of crystals of frozen methane. Thes
https://en.wikipedia.org/wiki/LAN%20eXtensions%20for%20Instrumentation
LAN eXtensions for Instrumentation (LXI) is a standard developed by the LXI Consortium, a consortium that maintains the LXI specification and promotes the LXI Standard. The LXI standard defines the communication protocols for instrumentation and data acquisition systems using Ethernet. Ethernet is a ubiquitous communication standard providing a versatile interface; the LXI standard describes how to use the Ethernet standards for test and measurement applications in a way that promotes simple interoperability between instruments. The LXI Consortium ensures LXI compliant instrumentation developed by various vendors works together with no communication or setup issues. The LXI Consortium ensures that the LXI standard complements other test and measurement control systems, such as GPIB and PXI systems. Overview Proposed in 2005 by Keysight (formerly called Agilent Technologies) and VTI Instruments (formerly called VXI Technology and now part of Ametek), the LXI standard adapts the Ethernet and World Wide Web standards and applies them to test and measurement applications. The standard defines how existing standards should be used in instrumentation applications to provide a consistent feel and ensure compatibility between vendors' equipment. The LXI standard does not define a mechanical format, allowing LXI solutions to take any physical form deemed suitable for products in their intended market. LXI products can be modular, rack mounted, bench mounted or take any other physical form. LXI supports synthetic instruments and peer-to-peer networking, providing a number of unique capabilities to the test engineer. LXI products may have no front panel or display, or they may include embedded keyboards and displays. Connections to the DUT are permitted to be on the front or the rear to suit market demand; most devices provide front panel connectivity, allowing Ethernet and power connections to be provided on the rear panel. Use of Ethernet allows the simple construction of
https://en.wikipedia.org/wiki/Calcium%20metabolism
Calcium metabolism is the movement and regulation of calcium ions (Ca2+) in (via the gut) and out (via the gut and kidneys) of the body, and between body compartments: the blood plasma, the extracellular and intracellular fluids, and bone. Bone acts as a calcium storage center for deposits and withdrawals as needed by the blood via continual bone remodeling. An important aspect of calcium metabolism is plasma calcium homeostasis, the regulation of calcium ions in the blood plasma within narrow limits. The level of the calcium in plasma is regulated by the hormones parathyroid hormone (PTH) and calcitonin. PTH is released by the chief cells of the parathyroid glands when the plasma calcium level falls below the normal range in order to raise it; calcitonin is released by the parafollicular cells of the thyroid gland when the plasma level of calcium is above the normal range in order to lower it. Body compartment content Calcium is the most abundant mineral in the human body. The average adult body contains in total approximately 1 kg, 99% in the skeleton in the form of calcium phosphate salts. The extracellular fluid (ECF) contains approximately 22 mmol, of which about 9 mmol is in the plasma. Approximately 10 mmol of calcium is exchanged between bone and the ECF over a period of twenty-four hours. Blood concentration The concentration of calcium ions inside cells (in the intracellular fluid) is more than 7,000 times lower than in the blood plasma (i.e. at <0.0002 mmol/L, compared with 1.4 mmol/L in the plasma) Normal plasma levels The plasma total calcium concentration is in the range of 2.2–2.6 mmol/L (9–10.5 mg/dL), and the normal ionized calcium is 1.3–1.5 mmol/L (4.5–5.6 mg/dL). The amount of total calcium in the blood varies with the level of plasma albumin, the most abundant protein in plasma, and therefore the main carrier of protein-bound calcium in the blood. The biologic effect of calcium is, however, determined by the amount of ionized calc
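The quoted total-calcium range can be checked with a quick unit conversion; the snippet below is a back-of-the-envelope sketch (it assumes a calcium molar mass of roughly 40.08 g/mol, a value not stated in the article).

```python
# Converting plasma calcium from mmol/L to mg/dL: 1 mmol of Ca is ~40.08 mg,
# and 1 L is 10 dL, so mg/dL = mmol/L * 40.08 / 10.
CA_MG_PER_MMOL = 40.08   # approximate molar mass of calcium (assumption)

def mmol_l_to_mg_dl(c):
    return c * CA_MG_PER_MMOL / 10

for c in (2.2, 2.6):
    print(f"{c} mmol/L  ->  {mmol_l_to_mg_dl(c):.1f} mg/dL")
# 2.2 -> 8.8 and 2.6 -> 10.4, in line with the 9-10.5 mg/dL range quoted above.
```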
https://en.wikipedia.org/wiki/Fibrocartilage%20callus
A fibrocartilage callus is a temporary formation of fibroblasts and chondroblasts which forms at the area of a bone fracture as the bone attempts to heal itself. The cells eventually dissipate and become dormant, lying in the resulting extracellular matrix that is the new bone. The callus is the first sign of union visible on x-rays, usually 3 weeks after the fracture. Callus formation is slower in adults than in children, and in cortical bones than in cancellous bones. See also Bone healing References Morgan, Elise F., et al. “Overview of Skeletal Repair (Fracture Healing and Its Assessment).” Methods in Molecular Biology Skeletal Development and Repair, 2014, pp. 13–31. External links Bone fractures Physiology
https://en.wikipedia.org/wiki/History%20of%20software%20configuration%20management
The history of software configuration management (SCM) in computing can be traced back as early as the 1950s, when CM (for Configuration Management), originally for hardware development and production control, was being applied to software development. The first software configuration management was most likely done manually. Eventually, software tools were written to manage software changes. History records tend to be based on tools and companies, and lend concepts to a secondary plane. Timeline Early 1960s or even late 1950s: CDC UPDATE and IBM IEB_UPDATE. Late 1960s into 1970s: The Librarian is released by Applied Data Research and provides an alternative to keeping programs on punched card decks for the IBM mainframe market. Late 1960s, early 1970s: Professor Leon Pressor at the University of California, Santa Barbara produced a thesis on change and configuration control. This concept was a response to a contract he was working on with a defense contractor who made aircraft engines for the US Navy. Early 1970s: Unix make. By 1970 CDC update was an advanced product. Circa 1972: Bell Labs paper describing the original diff algorithm. 1972, with an IEEE paper in 1975: source code control system, SCCS, Marc Rochkind Bell Labs. Originally programmed in SNOBOL for OS/360; subsequently rewritten in C for Unix (used diff for comparing files). 1970s: Lisle, Illinois-based Pansophic Systems offered PANVALET, which was an early source code control system for the mainframe market. 1975: Professor Pressor's work eventually grew into a commercially available product called Change and Configuration Control (CCC) which was sold by the SoftTool corporation. Revision Control System (RCS, Walter Tichy). Early 1980s: patch (around 1985, Larry Wall). 1984: Aide-de-Camp 1986: Concurrent Version System (CVS). 2000: Subversion initiated by CollabNet. Early 2000s (decade): distributed revision control systems like BitKeeper and GNU arch become viable. Background Unt
https://en.wikipedia.org/wiki/Nick%20Holonyak
Nick Holonyak Jr. ( ; November 3, 1928September 18, 2022) was an American engineer and educator. He is noted particularly for his 1962 invention and first demonstration of a semiconductor laser diode that emitted visible light. This device was the forerunner of the first generation of commercial light-emitting diodes (LEDs). He was then working at a General Electric Company research laboratory near Syracuse, New York. He left General Electric in 1963 and returned to his alma mater, the University of Illinois at Urbana-Champaign, where he later became John Bardeen Endowed Chair in Electrical and Computer Engineering and Physics. Early life and career Nick Holonyak Jr. was born in Zeigler, Illinois, on November 3, 1928. His parents were Rusyn immigrants. His father worked in a coal mine. Holonyak was the first member of his family to receive any type of formal schooling. He once worked 30 straight hours on the Illinois Central Railroad before realizing that a life of hard labor was not what he wanted and he would prefer to go to school instead. According to a Chicago Tribune article in 2003, "The cheap and reliable semiconductor lasers critical to DVD players, bar code readers and scores of other devices owe their existence in some small way to the demanding workload thrust upon Downstate railroad crews decades ago." Holonyak earned his bachelor's (1950), master's (1951), and doctoral (1954) degrees in electrical engineering from the University of Illinois at Urbana-Champaign. Holonyak was John Bardeen's first doctoral student there. In 1954, Holonyak went to Bell Telephone Laboratories, where he worked on silicon-based electronic devices. From 1955 to 1957 he served with the U.S. Army Signal Corps. From 1957 to 1963 he was a scientist at the General Electric Company's Advanced Semiconductor Laboratory near Syracuse, New York. Here he invented, fabricated, and demonstrated the first visible light laser diode on October 9, 1962. He grew crystals of the alloy GaAs0.
https://en.wikipedia.org/wiki/Radioactive%20tracer
A radioactive tracer, radiotracer, or radioactive label is a chemical compound in which one or more atoms have been replaced by a radionuclide so that, by virtue of its radioactive decay, it can be used to explore the mechanism of chemical reactions by tracing the path that the radioisotope follows from reactants to products. Radiolabeling or radiotracing is thus the radioactive form of isotopic labeling. In biological contexts, the use of radioisotope tracers is sometimes referred to as a radioisotope feeding experiment. Radioisotopes of hydrogen, carbon, phosphorus, sulfur, and iodine have been used extensively to trace the path of biochemical reactions. A radioactive tracer can also be used to track the distribution of a substance within a natural system such as a cell or tissue, or as a flow tracer to track fluid flow. Radioactive tracers are also used to determine the location of fractures created by hydraulic fracturing in natural gas production. Radioactive tracers form the basis of a variety of imaging systems, such as PET scans, SPECT scans and technetium scans. Radiocarbon dating uses the naturally occurring carbon-14 isotope as an isotopic label. Methodology Isotopes of a chemical element differ only in the mass number. For example, the isotopes of hydrogen can be written as ¹H, ²H and ³H, with the mass number superscripted to the left. When the atomic nucleus of an isotope is unstable, compounds containing this isotope are radioactive. Tritium is an example of a radioactive isotope. The principle behind the use of radioactive tracers is that an atom in a chemical compound is replaced by another atom, of the same chemical element. The substituting atom, however, is a radioactive isotope. This process is often called radioactive labeling. The power of the technique is due to the fact that radioactive decay is much more energetic than chemical reactions. Therefore, the radioactive isotope can be present in low concentration and its presence detected by sensitive radiation
https://en.wikipedia.org/wiki/%CE%A0-calculus
In theoretical computer science, the π-calculus (or pi-calculus) is a process calculus. The π-calculus allows channel names to be communicated along the channels themselves, and in this way it is able to describe concurrent computations whose network configuration may change during the computation. The π-calculus has few terms and is a small, yet expressive language. Functional programs can be encoded into the π-calculus, and the encoding emphasises the dialogue nature of computation, drawing connections with game semantics. Extensions of the π-calculus, such as the spi calculus and applied π-calculus, have been successful in reasoning about cryptographic protocols. Beside the original use in describing concurrent systems, the π-calculus has also been used to reason about business processes and molecular biology. Informal definition The π-calculus belongs to the family of process calculi, mathematical formalisms for describing and analyzing properties of concurrent computation. In fact, the π-calculus, like the λ-calculus, is so minimal that it does not contain primitives such as numbers, booleans, data structures, variables, functions, or even the usual control flow statements (such as if-then-else, while). Process constructs Central to the π-calculus is the notion of name. The simplicity of the calculus lies in the dual role that names play as communication channels and variables. The process constructs available in the calculus are the following (a precise definition is given in the following section): concurrency, written P | Q, where P and Q are two processes or threads executed concurrently. communication, where input prefixing c(x).P is a process waiting for a message that was sent on a communication channel named c before proceeding as P, binding the name received to the name x. Typically, this models either a process expecting a communication from the network or a label c usable only once by a goto c operation. output prefixing c̄⟨y⟩.P describes that the name y is emitted on channe
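To make the constructs above concrete, the toy sketch below encodes parallel composition, input prefixing and output prefixing as a small Python abstract syntax; it only illustrates the term structure (no reduction semantics), and every name in it is invented for the example.

```python
# Toy abstract syntax for pi-calculus process terms: 0, P | Q, c(x).P, c<y>.P.
from dataclasses import dataclass

@dataclass
class Nil:                # the inert process, usually written 0
    pass

@dataclass
class Par:                # P | Q : run both processes concurrently
    left: object
    right: object

@dataclass
class Input:              # c(x).P : wait for a name on channel c, bind it to x, continue as P
    channel: str
    binder: str
    cont: object

@dataclass
class Output:             # c<y>.P : emit the name y on channel c, then continue as P
    channel: str
    payload: str
    cont: object

# A server that receives a reply channel on c and answers on it, in parallel with
# a client that sends its private reply channel r: the channels themselves are the
# data being communicated, which is the distinctive feature of the calculus.
server = Input("c", "x", Output("x", "ok", Nil()))
client = Output("c", "r", Input("r", "z", Nil()))
system = Par(server, client)
print(system)
```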
https://en.wikipedia.org/wiki/JCSP
JCSP is an implementation of communicating sequential processes (CSP) for the programming language Java. Although CSP is a mathematical system, JCSP does not require in-depth mathematical skill, allowing instead that programmers can achieve well-behaved software by following simple rules. Overview There are four ways in which multi-threaded programs can fail untestably: Race conditions – shared variables may have indeterminate state because several threads access them concurrently without sufficient locking Deadlock – two or more threads reach a stalemate when they try to acquire locks or other resources in a conflicting way Livelock – similar to deadlock but resulting in endless waste of CPU time Starvation – one or more threads do no work, compromising the intended outcome of the software algorithms Generally, it is not possible to prove the absence of these four hazards merely by rigorous testing. Although rigorous testing is necessary, it is not sufficient. Instead it is necessary to have a design that can demonstrate these four hazards don't exist. CSP allows this to be done using mathematics and JCSP allows it to be done pragmatically in Java programs. The benefit of the basis in mathematics is that stronger guarantees of correct behaviour can be produced than would be possible with conventional ad hoc development. Fortunately, JCSP does not force its users to adopt a mathematical approach themselves, but allows them to benefit from the mathematics that underpins the library. Note that the CSP term process is used essentially as a synonym for thread in Java parlance; a process in CSP is a lightweight unit of execution that interacts with the outside world via events and is an active component that encapsulates the data structures on which it operates. Because the encapsulation of data is per-thread (per process in CSP parlance), there is typically no reliance on sharing data between threads. Instead, the coupling between threads happens via well-defined
https://en.wikipedia.org/wiki/Combinatorial%20optimization
Combinatorial optimization is a subfield of mathematical optimization that consists of finding an optimal object from a finite set of objects, where the set of feasible solutions is discrete or can be reduced to a discrete set. Typical combinatorial optimization problems are the travelling salesman problem ("TSP"), the minimum spanning tree problem ("MST"), and the knapsack problem. In many such problems, such as the ones previously mentioned, exhaustive search is not tractable, and so specialized algorithms that quickly rule out large parts of the search space or approximation algorithms must be resorted to instead. Combinatorial optimization is related to operations research, algorithm theory, and computational complexity theory. It has important applications in several fields, including artificial intelligence, machine learning, auction theory, software engineering, VLSI, applied mathematics and theoretical computer science. Some research literature considers discrete optimization to consist of integer programming together with combinatorial optimization (which in turn is composed of optimization problems dealing with graph structures), although all of these topics have closely intertwined research literature. It often involves determining the way to efficiently allocate resources used to find solutions to mathematical problems. Applications Applications of combinatorial optimization include, but are not limited to: Logistics Supply chain optimization Developing the best airline network of spokes and destinations Deciding which taxis in a fleet to route to pick up fares Determining the optimal way to deliver packages Allocating jobs to people optimally Designing water distribution networks Earth science problems (e.g. reservoir flow-rates) Methods There is a large amount of literature on polynomial-time algorithms for certain special classes of discrete optimization. A considerable amount of it is unified by the theory of linear programming. Some
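As a small, self-contained illustration of one of the problems named above, the sketch below solves a tiny 0/1 knapsack instance exactly by dynamic programming rather than exhaustive search; the instance itself is invented for the example.

```python
# 0/1 knapsack by dynamic programming: best[w] is the best value achievable
# with total weight at most w, updated one item at a time.
def knapsack(values, weights, capacity):
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate weights downwards so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```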
https://en.wikipedia.org/wiki/Discrete%20optimization
Discrete optimization is a branch of optimization in applied mathematics and computer science. Scope As opposed to continuous optimization, some or all of the variables used in a discrete optimization problem are restricted to be discrete variables—that is, to assume only a discrete set of values, such as the integers. Branches Three notable branches of discrete optimization are: combinatorial optimization, which refers to problems on graphs, matroids and other discrete structures integer programming constraint programming These branches are all closely intertwined however, since many combinatorial optimization problems can be modeled as integer programs (e.g. shortest path) or constraint programs, any constraint program can be formulated as an integer program and vice versa, and constraint and integer programs can often be given a combinatorial interpretation. See also Diophantine equation References Mathematical optimization
https://en.wikipedia.org/wiki/RELX
RELX plc (pronounced "Rel-ex") is a British multinational information and analytics company headquartered in London, England. Its businesses provide scientific, technical and medical information and analytics; legal information and analytics; decision-making tools; and organise exhibitions. It operates in 40 countries and serves customers in over 180 nations. It was previously known as Reed Elsevier, and came into being in 1993 as a result of the merger of Reed International, a British trade book and magazine publisher, and Elsevier, a Netherlands-based scientific publisher. The company is publicly listed, with shares traded on the London Stock Exchange, Amsterdam Stock Exchange and New York Stock Exchange (ticker symbols: London: REL, Amsterdam: REN, New York: RELX). The company is one of the constituents of the FTSE 100 Index, AEX Index, Financial Times Global 500 and Euronext 100 Index. History The company, which was previously known as Reed Elsevier, came into being in 1993, as a result of the merger of Reed International, a British trade book and magazine publisher, and Elsevier, a Netherlands-based scientific publisher. The company re-branded itself as RELX in February 2015. Reed International In 1895, Albert E. Reed established a newsprint manufacturing operation at Tovil Mill near Maidstone, Kent. The Reed family were Methodists and encouraged good working conditions for their staff in the then-dangerous print trade. In 1965, Reed Group, as it was then known, became a conglomerate, creating its Decorative Products Division with the purchase of Crown Paints, Polycell and Sanderson's wallpaper and DIY decorating interests. In 1970, Reed Group merged with the International Publishing Corporation and the company name was changed to Reed International Limited. The company continued to grow by merging with other publishers and produced high quality trade journals as IPC Business Press Ltd and women's and other consumer magazines as IPC magazines Ltd. Reed ent
https://en.wikipedia.org/wiki/Continuous%20optimization
Continuous optimization is a branch of optimization in applied mathematics. As opposed to discrete optimization, the variables used in the objective function are required to be continuous variables—that is, to be chosen from a set of real values between which there are no gaps (values from intervals of the real line). Because of this continuity assumption, continuous optimization allows the use of calculus techniques. References Mathematical optimization
https://en.wikipedia.org/wiki/Pro%20Tools
Pro Tools is a digital audio workstation (DAW) developed and released by Avid Technology (formerly Digidesign) for Microsoft Windows and macOS. It is used for music creation and production, sound for picture (sound design, audio post-production and mixing) and, more generally, sound recording, editing, and mastering processes. Pro Tools operates both as standalone software and in conjunction with a range of external analog-to-digital converters and PCIe cards with on-board digital signal processors (DSP). The DSP is used to provide additional processing power to the host computer for processing real-time effects, such as reverb, equalization, and compression and to obtain lower latency audio performance. Like all digital audio workstation software, Pro Tools can perform the functions of a multitrack tape recorder and a mixing console along with additional features that can only be performed in the digital domain, such as non-linear and non-destructive editing (most of audio handling is done without overwriting the source files), track compositing with multiple playlists, time compression and expansion, pitch shifting, and faster-than-real-time mixdown. Audio, MIDI, and video tracks are graphically represented on a timeline. Audio effects, virtual instruments, and hardware emulators—such as microphone preamps or guitar amplifiers—can be added, adjusted, and processed in real-time in a virtual mixer. 16-bit, 24-bit, and 32-bit float audio bit depths at sample rates up to 192 kHz are supported. Pro Tools supports mixed bit depths and audio formats in a session: BWF/WAV (including WAVE Extensible, RF64 and BW64) and AIFF. It imports and exports MOV video files and ADM BWF files (audio files with Dolby Atmos metadata); it also imports MXF, ACID and REX files and the lossy formats MP3, AAC, M4A, and audio from video files (MOV, MP4, M4V). The legacy SDII format was dropped with Pro Tools 10, although SDII conversion is still possible on macOS. Pro Tools has incorporate
https://en.wikipedia.org/wiki/Advanced%20multi-mission%20operations%20system
The advanced multi-mission operations system (AMMOS) is a common set of services and tools created by the Interplanetary Network Directorate, a division of the Jet Propulsion Laboratory, for use in JPL's operation of spacecraft. These tools include a means by which mission planning and analysis can be undertaken, as well as developing pre-planned command sequences for the spacecraft. AMMOS also provides a means by which downlinked data can be displayed and manipulated, including key mission telemetry such as readings of temperature, pressure, power, and other critical indicators. This common toolset allows space missions to minimize the cost of developing operations infrastructure, which is very important in light of recent restricted spending by space agencies. References External Links Official website Aerospace engineering
https://en.wikipedia.org/wiki/Incidence%20matrix
In mathematics, an incidence matrix is a logical matrix that shows the relationship between two classes of objects, usually called an incidence relation. If the first class is X and the second is Y, the matrix has one row for each element of X and one column for each element of Y. The entry in row x and column y is 1 if x and y are related (called incident in this context) and 0 if they are not. There are variations; see below. Graph theory Incidence matrix is a common graph representation in graph theory. It is different to an adjacency matrix, which encodes the relation of vertex-vertex pairs. Undirected and directed graphs In graph theory an undirected graph has two kinds of incidence matrices: unoriented and oriented. The unoriented incidence matrix (or simply incidence matrix) of an undirected graph is an n × m matrix B, where n and m are the numbers of vertices and edges respectively, such that B(i, j) = 1 if vertex i and edge j are incident and 0 otherwise. For example, the incidence matrix of the undirected graph shown on the right is a matrix consisting of 4 rows (corresponding to the four vertices, 1–4) and 4 columns (corresponding to the four edges): If we look at the incidence matrix, we see that the sum of each column is equal to 2. This is because each edge has a vertex connected to each end. The incidence matrix of a directed graph is an n × m matrix B where n and m are the number of vertices and edges respectively, such that B(i, j) = −1 if edge j leaves vertex i, 1 if edge j enters vertex i, and 0 otherwise. (Many authors use the opposite sign convention.) The oriented incidence matrix of an undirected graph is the incidence matrix, in the sense of directed graphs, of any orientation of the graph. That is, in the column of edge e, there is one 1 in the row corresponding to one vertex of e and one −1 in the row corresponding to the other vertex of e, and all other rows have 0. The oriented incidence matrix is unique up to negation of any of the columns, since negating the entries of a column corresponds to reversing the orientation of an edge. The unoriented incidence matrix of a graph G is related
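The sketch below builds both matrices for a small, made-up 4-vertex graph (it is not the graph from the article's figure) and checks the column-sum property mentioned above.

```python
# Build the unoriented and oriented incidence matrices of a graph given as an
# edge list. For the oriented matrix, -1 marks the tail of an edge and +1 its head
# (many authors use the opposite sign convention, as the article notes).
def incidence_matrices(n_vertices, edges):
    unoriented = [[0] * len(edges) for _ in range(n_vertices)]
    oriented = [[0] * len(edges) for _ in range(n_vertices)]
    for j, (u, v) in enumerate(edges):
        unoriented[u][j] = unoriented[v][j] = 1
        oriented[u][j], oriented[v][j] = -1, 1
    return unoriented, oriented

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]          # a hypothetical 4-vertex, 4-edge graph
B, B_oriented = incidence_matrices(4, edges)
print([sum(col) for col in zip(*B)])              # [2, 2, 2, 2]: every column sums to 2
```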
https://en.wikipedia.org/wiki/IPX/SPX
IPX/SPX stands for Internetwork Packet Exchange/Sequenced Packet Exchange. IPX and SPX are networking protocols used initially on networks using the (since discontinued) Novell NetWare operating systems. They also became widely used on networks deploying Microsoft Windows LANS, as they replaced NetWare LANS, but are no longer widely used. IPX/SPX was also widely used prior to and up to Windows XP, which supported the protocols, while later Windows versions do not, and TCP/IP took over for networking. Protocol layers IPX and SPX are derived from Xerox Network Systems' IDP and SPP protocols respectively. IPX is a network-layer protocol (layer 3 of the OSI model), while SPX is a transport-layer protocol (layer 4 of the OSI model). The SPX layer sits on top of the IPX layer and provides connection-oriented services between two nodes on the network. SPX is used primarily by client–server applications. IPX and SPX both provide connection services similar to TCP/IP, with the IPX protocol having similarities to Internet Protocol, and SPX having similarities to TCP. IPX/SPX was primarily designed for local area networks (LANs) and is a very efficient protocol for this purpose (typically SPX's performance exceeds that of TCP on a small LAN, as in place of congestion windows and confirmatory acknowledgements, SPX uses simple NAKs). TCP/IP has, however, become the de facto standard protocol. This is in part due to its superior performance over wide area networks and the Internet (which uses IP exclusively), and also because TCP/IP is a more mature protocol, designed specifically with this purpose in mind. Despite the protocols' association with NetWare, they are neither required for NetWare communication (as of NetWare 5.x), nor exclusively used on NetWare networks. NetWare communication requires an NCP implementation, which can use IPX/SPX, TCP/IP, or both, as a transport. Implementations Novell was largely responsible for the use of IPX as a popular computer networking pr
https://en.wikipedia.org/wiki/Skolem%20normal%20form
In mathematical logic, a formula of first-order logic is in Skolem normal form if it is in prenex normal form with only universal first-order quantifiers. Every first-order formula may be converted into Skolem normal form while not changing its satisfiability via a process called Skolemization (sometimes spelled Skolemnization). The resulting formula is not necessarily equivalent to the original one, but is equisatisfiable with it: it is satisfiable if and only if the original one is satisfiable. Reduction to Skolem normal form is a method for removing existential quantifiers from formal logic statements, often performed as the first step in an automated theorem prover. Examples The simplest form of Skolemization is for existentially quantified variables that are not inside the scope of a universal quantifier. These may be replaced simply by creating new constants. For example, ∃x P(x) may be changed to P(c), where c is a new constant (c does not occur anywhere else in the formula). More generally, Skolemization is performed by replacing every existentially quantified variable y with a term f(x1, ..., xn) whose function symbol f is new. The variables of this term are as follows. If the formula is in prenex normal form, then x1, ..., xn are the variables that are universally quantified and whose quantifiers precede that of y. In general, they are the variables that are quantified universally (we assume we get rid of existential quantifiers in order, so all existential quantifiers before ∃y have been removed) and such that ∃y occurs in the scope of their quantifiers. The function f introduced in this process is called a Skolem function (or Skolem constant if it is of zero arity) and the term f(x1, ..., xn) is called a Skolem term. As an example, the formula ∀x ∃y ∀z P(x, y, z) is not in Skolem normal form because it contains the existential quantifier ∃y. Skolemization replaces y with f(x), where f is a new function symbol, and removes the quantification over y. The resulting formula is ∀x ∀z P(x, f(x), z). The Skolem term f(x) contains x, but not z, because the quantifier
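The replacement step described above can be sketched mechanically; the following toy function Skolemizes a formula already in prenex form, given as a quantifier prefix plus a matrix written as a string (the representation and names are invented for the example, and the textual substitution is deliberately naive).

```python
# Toy Skolemization of a prenex formula: existential variables are replaced by
# Skolem terms over the universal variables whose quantifiers precede them.
from itertools import count

def skolemize(prefix, matrix):
    fresh = (f"f{i}" for i in count(1))
    universals, new_prefix = [], []
    for quantifier, var in prefix:
        if quantifier == "forall":
            universals.append(var)
            new_prefix.append((quantifier, var))
        else:  # "exists": drop the quantifier, substitute a Skolem term for var
            term = f"{next(fresh)}({', '.join(universals)})" if universals else next(fresh)
            matrix = matrix.replace(var, term)   # naive; fine for distinct one-letter variables
    return new_prefix, matrix

print(skolemize([("forall", "x"), ("exists", "y"), ("forall", "z")], "P(x, y, z)"))
# ([('forall', 'x'), ('forall', 'z')], 'P(x, f1(x), z)')
```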
https://en.wikipedia.org/wiki/Ratio%20test
In mathematics, the ratio test is a test (or "criterion") for the convergence of a series Σ a_n where each term a_n is a real or complex number and a_n is nonzero when n is large. The test was first published by Jean le Rond d'Alembert and is sometimes known as d'Alembert's ratio test or as the Cauchy ratio test. The test The usual form of the test makes use of the limit L = lim_(n→∞) |a_(n+1)/a_n|. The ratio test states that: if L < 1 then the series converges absolutely; if L > 1 then the series diverges; if L = 1 or the limit fails to exist, then the test is inconclusive, because there exist both convergent and divergent series that satisfy this case. It is possible to make the ratio test applicable to certain cases where the limit L fails to exist, if limit superior and limit inferior are used. The test criteria can also be refined so that the test is sometimes conclusive even when L = 1. More specifically, let R = lim sup |a_(n+1)/a_n| and r = lim inf |a_(n+1)/a_n|. Then the ratio test states that: if R < 1, the series converges absolutely; if r > 1, the series diverges; or equivalently if |a_(n+1)/a_n| > 1 for all large n (regardless of the value of r), the series also diverges; this is because |a_n| is nonzero and increasing and hence does not approach zero; the test is otherwise inconclusive. If the limit L above exists, we must have L = R = r. So the original ratio test is a weaker version of the refined one. Examples Convergent because L < 1 Consider the series Applying the ratio test, one computes the limit Since this limit is less than 1, the series converges. Divergent because L > 1 Consider the series Putting this into the ratio test: Thus the series diverges. Inconclusive because L = 1 Consider the three series The first series (1 + 1 + 1 + 1 + ⋯) diverges, the second one (the one central to the Basel problem) converges absolutely and the third one (the alternating harmonic series) converges conditionally. However, the term-by-term magnitude ratios of the three series are respectively 1, (n/(n + 1))^2 and n/(n + 1). So, in all three cases, one has that the
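A quick numerical illustration of the test (with example series chosen here, not taken from the article): the ratio |a_(n+1)/a_n| is computed at a large n and compared with 1.

```python
# Estimate the limiting ratio |a_{n+1} / a_n| numerically for three series.
def ratio(a, n):
    return abs(a(n + 1) / a(n))

series = {
    "sum n / 2^n   (converges, L = 1/2)": lambda n: n / 2 ** n,
    "sum 2^n / n   (diverges,  L = 2)":   lambda n: 2 ** n / n,
    "sum 1 / n     (inconclusive, L = 1)": lambda n: 1 / n,
}
for name, a in series.items():
    print(f"{name}: ratio at n = 1000 is {ratio(a, 1000):.4f}")
```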
https://en.wikipedia.org/wiki/Non-analytic%20smooth%20function
In mathematics, smooth functions (also called infinitely differentiable functions) and analytic functions are two very important types of functions. One can easily prove that any analytic function of a real argument is smooth. The converse is not true, as demonstrated with the counterexample below. One of the most important applications of smooth functions with compact support is the construction of so-called mollifiers, which are important in theories of generalized functions, such as Laurent Schwartz's theory of distributions. The existence of smooth but non-analytic functions represents one of the main differences between differential geometry and analytic geometry. In terms of sheaf theory, this difference can be stated as follows: the sheaf of differentiable functions on a differentiable manifold is fine, in contrast with the analytic case. The functions below are generally used to build up partitions of unity on differentiable manifolds. An example function Definition of the function Consider the function f(x) = e^(−1/x) if x > 0 and f(x) = 0 if x ≤ 0, defined for every real number x. The function is smooth The function f has continuous derivatives of all orders at every point x of the real line. The formula for these derivatives is f^(n)(x) = (p_n(x)/x^(2n)) e^(−1/x) for x > 0 and f^(n)(x) = 0 for x ≤ 0, where p_n(x) is a polynomial of degree n − 1 given recursively by p_1(x) = 1 and p_(n+1)(x) = x^2 p_n′(x) − (2nx − 1) p_n(x) for any positive integer n. From this formula, it is not completely clear that the derivatives are continuous at 0; this follows from the one-sided limit lim_(x→0⁺) e^(−1/x)/x^m = 0 for any nonnegative integer m. By the power series representation of the exponential function, we have for every natural number m (including zero) e^(1/x) = Σ_(k≥0) (1/x)^k/k! ≥ (1/x)^(m+1)/(m + 1)! for x > 0, because all the positive terms for k ≠ m + 1 are added. Therefore, e^(−1/x)/x^m ≤ (m + 1)! x for x > 0, and taking the limit from above gives the one-sided limit stated. We now prove the formula for the nth derivative of f by mathematical induction. Using the chain rule, the reciprocal rule, and the fact that the derivative of the exponential function is again the exponential function, we see that the formula is correct for the first deri
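A quick numerical look at the counterexample f(x) = e^(−1/x) for x > 0 (and 0 for x ≤ 0) shows how violently it flattens near 0, which is the content of the one-sided limit used above; the check below is illustrative only.

```python
# f vanishes faster than any power of x as x -> 0 from the right, so every
# difference quotient at 0 (and hence every derivative there) is 0.
import math

def f(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

for x in (0.5, 0.2, 0.1, 0.05, 0.02, 0.01):
    print(f"x = {x:<5}  f(x) = {f(x):.3e}  f(x) / x^5 = {f(x) / x ** 5:.3e}")
# Even after dividing by x^5, the values still tend to 0 as x approaches 0+.
```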
https://en.wikipedia.org/wiki/Tiki%20Wiki%20CMS%20Groupware
Tiki Wiki CMS Groupware or simply Tiki, originally known as TikiWiki, is a free and open source Wiki-based content management system and online office suite written primarily in PHP and distributed under the GNU Lesser General Public License (LGPL-2.1-only) license. In addition to enabling websites and portals on the internet and on intranets and extranets, Tiki contains a number of collaboration features allowing it to operate as a Geospatial Content Management System (GeoCMS) and Groupware web application. Tiki includes all the basic features common to most CMSs such as the ability to register and maintain individual user accounts within a flexible and rich permission / privilege system, create and manage menus, RSS-feeds, customize page layout, perform logging, and administer the system. All administration tasks are accomplished through a browser-based user interface. Tiki features an all-in-one design, as opposed to a core+extensions model followed by other CMSs. This allows for future-proof upgrades (since all features are released together), but has the drawback of an extremely large codebase (more than 1,000,000 lines). Tiki can run on any computing platform that supports both a web server capable of running PHP 5 (including Apache HTTP Server, IIS, Lighttpd, Hiawatha, Cherokee, and nginx) and a MySQL database to store content and settings. Major components Tiki has four major categories of components: content creation and management tools, content organization tools and navigation aids, communication tools, and configuration and administration tools. These components enable administrators and users to create and manage content, as well as letting them communicate to others and configure sites. In addition, Tiki allows each user to choose from various visual themes. These themes are implemented using CSS and the open source Smarty template engine. Additional themes can be created by a Tiki administrator for branding or customization as well. Internatio
https://en.wikipedia.org/wiki/Permutable%20prime
A permutable prime, also known as anagrammatic prime, is a prime number which, in a given base, can have its digits' positions switched through any permutation and still be a prime number. H. E. Richert, who is supposedly the first to study these primes, called them permutable primes, but later they were also called absolute primes. Base 2 In base 2, only repunits can be permutable primes, because any 0 permuted to the ones place results in an even number. Therefore, the base 2 permutable primes are the Mersenne primes. The generalization can safely be made that for any positional number system, permutable primes with more than one digit can only have digits that are coprime with the radix of the number system. One-digit primes, meaning any prime below the radix, are always trivially permutable. Base 10 In base 10, all the permutable primes with fewer than 49,081 digits are known: 2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97, 113, 131, 199, 311, 337, 373, 733, 919, 991, R19 (1111111111111111111), R23, R317, R1031, R49081, ... Of the above, there are 16 unique permutation sets, with smallest elements 2, 3, 5, 7, R2, 13, 17, 37, 79, 113, 199, 337, R19, R23, R317, R1031, ... Note Rn := (10^n − 1)/9 is a repunit, a number consisting only of n ones (in base 10). Any repunit prime is a permutable prime with the above definition, but some definitions require at least two distinct digits. All permutable primes of two or more digits are composed from the digits 1, 3, 7, 9, because no prime number except 2 is even, and no prime number besides 5 is divisible by 5. It is proven that no permutable prime exists which contains three different digits among 1, 3, 7, 9, as well as that there exists no permutable prime composed of two or more of each of two digits selected from 1, 3, 7, 9. There is no n-digit permutable prime for 3 < n < 6·10^175 which is not a repunit. It is conjectured that there are no non-repunit permutable primes other than the eighteen listed above. They ca
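The definition can be checked directly by brute force for small numbers; the sketch below tests every permutation of a number's digits for primality (it is only practical for tiny inputs and is not how the large repunit examples above were found).

```python
# Brute-force check of the base-10 permutable prime definition.
from itertools import permutations

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_permutable_prime(n):
    return all(is_prime(int("".join(p))) for p in set(permutations(str(n))))

print([n for n in range(2, 1000) if is_permutable_prime(n)])
# Reproduces the entries below 1000 in the list above:
# [2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97, 113, 131, 199, 311, 337, 373, 733, 919, 991]
```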
https://en.wikipedia.org/wiki/130%20%28number%29
130 (one hundred [and] thirty) is the natural number following 129 and preceding 131. In mathematics 130 is a sphenic number. It is a noncototient since there is no answer to the equation x - φ(x) = 130. 130 is the only integer that is the sum of the squares of its first four divisors, including 1: 1^2 + 2^2 + 5^2 + 10^2 = 130. 130 is the largest number that cannot be written as the sum of four hexagonal numbers. 130 equals both 2^7 + 2 and 5^3 + 5 and is therefore a doubly strictly number. In religion The Book of Genesis states Adam had Seth at the age of 130. The Second Book of Chronicles says that Jehoiada died at the age of 130. In other fields One hundred [and] thirty is also: The year AD 130 or 130 BC The 130 nanometer process is a semiconductor process technology by semiconductor companies A 130-30 fund or a ratio up to 150/50 is a type of collective investment vehicle The C130 Hercules aircraft References See also List of highways numbered 130 United Nations Security Council Resolution 130 130 Liberty Street, New York City Integers
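Two of the arithmetic facts above are easy to verify directly; the snippet below is a quick check, not part of the article.

```python
# Verify: 130 is the sum of the squares of its first four divisors,
# and equals both 2^7 + 2 and 5^3 + 5.
divisors = [d for d in range(1, 131) if 130 % d == 0]
print(divisors[:4], sum(d * d for d in divisors[:4]))   # [1, 2, 5, 10] -> 130
print(2 ** 7 + 2, 5 ** 3 + 5)                           # 130 130
```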
https://en.wikipedia.org/wiki/140%20%28number%29
140 (one hundred [and] forty) is the natural number following 139 and preceding 141. In mathematics 140 is an abundant number and a harmonic divisor number. It is the sum of the squares of the first seven integers, which makes it a square pyramidal number. 140 is an odious number because it has an odd number of ones in its binary representation. The sum of Euler's totient function φ(x) over the first twenty-one integers is 140. 140 is a repdigit in bases 13, 19, 27, 34, 69, and 139. In other fields 140 is also: The number of varieties of ashes from different varieties of pipe, cigar, and cigarette tobacco included in the Sherlock Holmes monograph. The former Twitter entry-character limit, a well-known characteristic of the service (based on the text messaging limit) A film, based on the Twitter entry-character limit, created and edited by Frank Kelly of Ireland The age at which Job died The atomic number of unquadnilium, a temporary chemical element PRO 140 antibody found on T lymphocytes of the human immune system Telephone directory assistance in Egypt A video game developed by Jeppe Carlsen The BPM (tempo) of the music genre Dubstep See also List of highways numbered 140 United Nations Security Council Resolution 140 United States Supreme Court cases, Volume 140 References External links The Natural Number 140 The Number 140 at The Database of Number Correlations Integers
https://en.wikipedia.org/wiki/150%20%28number%29
150 (one hundred [and] fifty) is the natural number following 149 and preceding 151. In mathematics 150 is the sum of eight consecutive primes (7 + 11 + 13 + 17 + 19 + 23 + 29 + 31). Given 150, the Mertens function returns 0. 150 is conjectured to be the only minimal difference greater than 1 of any increasing arithmetic progression of n primes (in this case, n = 7) that is not a primorial (a product of the first m primes). The sum of Euler's totient function φ(x) over the first twenty-two integers is 150. 150 is a Harshad number and an abundant number. 150 degrees is the measure of the internal angle of a regular dodecagon. In the Bible The last numbered Psalm in the Bible, Psalm 150, considered the one most often set to music. The number of sons of Ulam, who were combat archers, in the Census of the men of Israel upon return from exile (I Chronicles 8:40) In the Book of Genesis, the number of days the waters from the Great Flood persisted on the Earth before subsiding. Manuscripts Uncial 0150 Minuscule 150 Lectionary 150 In sports In Round 20 of the 2011 AFL season, inflicted the worst ever defeat on the Gold Coast Suns by 150 points. In other fields 150 is also: The number of degrees in the quincunx astrological aspect explored by Johannes Kepler. The approximate value for Dunbar's number, a theoretical value with implications in sociology and anthropology The total number of Power Stars in Super Mario 64 DS for the Nintendo DS. The total number of dragon eggs in Spyro: Year of the Dragon. See also List of highways numbered 150 United Nations Security Council Resolution 150 United States Supreme Court cases, Volume 150 References Integers
https://en.wikipedia.org/wiki/Optical%20disc%20authoring
Optical disc authoring, including CD, DVD, and Blu-ray Disc authoring, is the process of assembling source material—video, audio or other data—into the proper logical volume format to then be recorded ("burned") onto an optical disc (typically a compact disc or DVD). This act is sometimes done illegally, by pirating copyrighted material without permission from the original artists. Process To burn an optical disc, one usually first creates an optical disc image with a full file system, of a type designed for the optical disc, in temporary storage such as a file in another file system on a disk drive. One may test the image on target devices using rewriteable media such as CD-RW, DVD±RW and BD-RE. Then, one copies the image to the disc (usually write-once media for hard distribution). Most optical disc authoring utilities create a disc image and copy it to the disc in one bundled operation, so that end-users often do not know the distinction between creating and burning. However, it is useful to know because creating the disc image is a time-consuming process, while copying the image is much faster. Most disc burning applications silently delete the image from the Temporary folder after making one copy. If users override this default, telling the application to preserve the image, they can reuse the image to create more copies. Otherwise, they must rebuild the image each time they want a copy. Some packet-writing applications do not require writing the entire disc at once, but allow writing of different parts at different times. This allows a user to construct a disc incrementally, as it could be on a rewritable medium like a floppy disk or rewritable CD. However, if the disc is non-rewritable, a given bit can be written only once. Due to this limitation, a non-rewritable disc whose burn failed for any reason cannot be repaired. (Such a disc is colloquially termed a "coaster", a reference to a beverage coaster.) There are many optical disc authoring technologi
https://en.wikipedia.org/wiki/Wired%20Equivalent%20Privacy
Wired Equivalent Privacy (WEP) was a severely flawed security algorithm for 802.11 wireless networks. Introduced as part of the original IEEE 802.11 standard ratified in 1997, its intention was to provide data confidentiality comparable to that of a traditional wired network. WEP, recognizable by its key of 10 or 26 hexadecimal digits (40 or 104 bits), was at one time widely used, and was often the first security choice presented to users by router configuration tools. Subsequent to a 2001 disclosure of a severe design flaw in the algorithm, WEP was never again secure in practice. In the vast majority of cases, Wi-Fi hardware devices relying on WEP security could not be upgraded to secure operation. Some of the design flaws were addressed in WEP2, but WEP2 also proved insecure, and another generation of hardware could not be upgraded to secure operation. In 2003, the Wi-Fi Alliance announced that WEP and WEP2 had been superseded by Wi-Fi Protected Access (WPA). In 2004, with the ratification of the full 802.11i standard (i.e. WPA2), the IEEE declared that both WEP-40 and WEP-104 have been deprecated. WPA retained some design characteristics of WEP that remained problematic. WEP was the only encryption protocol available to 802.11a and 802.11b devices built before the WPA standard, which was available for 802.11g devices. However, some 802.11b devices were later provided with firmware or software updates to enable WPA, and newer devices had it built in. History WEP was ratified as a Wi-Fi security standard in 1999. The first versions of WEP were not particularly strong, even for the time they were released, due to U.S. restrictions on the export of various cryptographic technologies. These restrictions led to manufacturers restricting their devices to only 64-bit encryption. When the restrictions were lifted, the encryption was increased to 128 bits. Despite the introduction of 256-bit WEP, 128-bit remains one of the most common implementations. Encryption det
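The per-packet construction can be sketched in a few lines of Python. This is an illustrative toy, not an interoperable WEP implementation: it shows RC4 keyed with a 24-bit IV prepended to a static 40-bit secret, which is the reuse of a short IV with an unchanging key that the published attacks exploited. The key and IV values are made up for the example.

```python
# Illustrative sketch of WEP-style per-packet keying (not interoperable WEP).
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

secret = bytes.fromhex("0123456789")           # static 40-bit shared key
iv = (12345).to_bytes(3, "big")                # 24-bit IV, sent in the clear
ciphertext = rc4(iv + secret, b"hello frame")  # per-packet key = IV || secret
print(iv.hex(), ciphertext.hex())
```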
https://en.wikipedia.org/wiki/Wi-Fi%20Protected%20Access
Wi-Fi Protected Access (WPA), Wi-Fi Protected Access 2 (WPA2), and Wi-Fi Protected Access 3 (WPA3) are the three security certification programs developed after 2000 by the Wi-Fi Alliance to secure wireless computer networks. The Alliance defined these in response to serious weaknesses researchers had found in the previous system, Wired Equivalent Privacy (WEP). WPA (sometimes referred to as the TKIP standard) became available in 2003. The Wi-Fi Alliance intended it as an intermediate measure in anticipation of the availability of the more secure and complex WPA2, which became available in 2004 and is a common shorthand for the full IEEE 802.11i (or IEEE 802.11i-2004) standard. In January 2018, the Wi-Fi Alliance announced the release of WPA3, which has several security improvements over WPA2. As of 2023, most computers that connect to a wireless network have support for using WPA, WPA2, or WPA3. Versions WPA The Wi-Fi Alliance intended WPA as an intermediate measure to take the place of WEP pending the availability of the full IEEE 802.11i standard. WPA could be implemented through firmware upgrades on wireless network interface cards designed for WEP that began shipping as far back as 1999. However, since the changes required in the wireless access points (APs) were more extensive than those needed on the network cards, most pre-2003 APs could not be upgraded to support WPA. The WPA protocol implements the Temporal Key Integrity Protocol (TKIP). WEP used a 64-bit or 128-bit encryption key that must be manually entered on wireless access points and devices and does not change. TKIP employs a per-packet key, meaning that it dynamically generates a new 128-bit key for each packet and thus prevents the types of attacks that compromised WEP. WPA also includes a Message Integrity Check, which is designed to prevent an attacker from altering and resending data packets. This replaces the cyclic redundancy check (CRC) that was used by the WEP standard. CRC's main fl
https://en.wikipedia.org/wiki/Monolithic%20microwave%20integrated%20circuit
Monolithic microwave integrated circuit, or MMIC (sometimes pronounced "mimic"), is a type of integrated circuit (IC) device that operates at microwave frequencies (300 MHz to 300 GHz). These devices typically perform functions such as microwave mixing, power amplification, low-noise amplification, and high-frequency switching. Inputs and outputs on MMIC devices are frequently matched to a characteristic impedance of 50 ohms. This makes them easier to use, as cascading of MMICs does not then require an external matching network. Additionally, most microwave test equipment is designed to operate in a 50-ohm environment. MMICs are dimensionally small (from around 1 mm² to 10 mm²) and can be mass-produced, which has allowed the proliferation of high-frequency devices such as cellular phones. MMICs were originally fabricated using gallium arsenide (GaAs), a III-V compound semiconductor. It has two fundamental advantages over silicon (Si), the traditional material for IC realisation: device (transistor) speed and a semi-insulating substrate. Both factors help with the design of high-frequency circuit functions. However, the speed of Si-based technologies has gradually increased as transistor feature sizes have reduced, and MMICs can now also be fabricated in Si technology. The primary advantage of Si technology is its lower fabrication cost compared with GaAs. Silicon wafer diameters are larger (typically 8" to 12" compared with 4" to 8" for GaAs) and the wafer costs are lower, contributing to a less expensive IC. Originally, MMICs used metal-semiconductor field-effect transistors (MESFETs) as the active device. More recently high-electron-mobility transistor (HEMTs), pseudomorphic HEMTs and heterojunction bipolar transistors have become common. Other III-V technologies, such as indium phosphide (InP), have been shown to offer superior performance to GaAs in terms of gain, higher cutoff frequency, and low noise. However, they also tend to be more expensive due to smal
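The benefit of 50-ohm matched ports can be quantified with the reflection coefficient. The snippet below is an illustrative calculation, not tied to any particular MMIC: it computes the reflection coefficient Γ = (Z_L − Z_0)/(Z_L + Z_0) and the corresponding return loss for a load measured against a 50-ohm reference impedance.

```python
import math

def reflection_coefficient(z_load: complex, z0: float = 50.0) -> complex:
    """Gamma = (ZL - Z0) / (ZL + Z0) for a load ZL against reference Z0."""
    return (z_load - z0) / (z_load + z0)

def return_loss_db(z_load: complex, z0: float = 50.0) -> float:
    """Return loss in dB; larger values indicate a better match."""
    return -20.0 * math.log10(abs(reflection_coefficient(z_load, z0)))

print(round(return_loss_db(75.0), 1))   # ~14.0 dB for a 75-ohm load
print(round(return_loss_db(55.0), 1))   # ~26.4 dB for a 55-ohm load
```

The closer a port sits to the 50-ohm reference, the higher its return loss, which is why cascading matched MMICs needs no external matching network.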
https://en.wikipedia.org/wiki/Mass%E2%80%93energy%20equivalence
In physics, mass–energy equivalence is the relationship between mass and energy in a system's rest frame, where the two quantities differ only by a multiplicative constant and the units of measurement. The principle is described by the physicist Albert Einstein's formula: E = mc². In a reference frame where the system is moving, its relativistic energy and relativistic mass (instead of rest mass) obey the same formula. The formula defines the energy of a particle in its rest frame as the product of mass (m) with the speed of light squared (c²). Because the speed of light is a large number in everyday units (approximately 3×10⁸ metres per second), the formula implies that a small amount of "rest mass", measured when the system is at rest, corresponds to an enormous amount of energy, which is independent of the composition of the matter. Rest mass, also called invariant mass, is a fundamental physical property that is independent of momentum, even at extreme speeds approaching the speed of light. Its value is the same in all inertial frames of reference. Massless particles such as photons have zero invariant mass, but massless free particles have both momentum and energy. The equivalence principle implies that when energy is lost in chemical reactions, nuclear reactions, and other energy transformations, the system will also lose a corresponding amount of mass. The energy, and mass, can be released to the environment as radiant energy, such as light, or as thermal energy. The principle is fundamental to many fields of physics, including nuclear and particle physics. Mass–energy equivalence arose from special relativity as a paradox described by the French polymath Henri Poincaré (1854–1912). Einstein was the first to propose the equivalence of mass and energy as a general principle and a consequence of the symmetries of space and time. The principle first appeared in "Does the inertia of a body depend upon its energy-content?", one of his annus mirabilis papers, published on 21 November 1905. Th
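A quick numerical illustration of the scale involved (illustrative only; the mass value is arbitrary):

```python
# Energy equivalent of one gram of rest mass, E = m * c**2.
c = 299_792_458.0          # speed of light, m/s
m = 0.001                  # one gram, in kg

energy_joules = m * c ** 2
print(f"{energy_joules:.3e} J")   # about 9.0e13 J, roughly 21 kilotons of TNT
```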
https://en.wikipedia.org/wiki/XScreenSaver
XScreenSaver is a free and open-source collection of 240+ screensavers for Unix, macOS, iOS and Android operating systems. It was created by Jamie Zawinski in 1992 and is still maintained by him, with new releases coming out several times a year. Platforms The free software and open-source Unix-like operating systems running the X Window System (such as Linux and FreeBSD) use XScreenSaver almost exclusively. On those systems, there are several packages: one for the screen-saving and locking framework, and two or more for the display modes, divided somewhat arbitrarily. On Macintosh systems, XScreenSaver works with the built-in macOS screen saver. On iOS systems, XScreenSaver is a stand-alone app that can run any of the hacks full-screen. On Android systems, the XScreenSaver display modes work either as normal screen savers (which Android sometimes refers to as "Daydreams") or as live wallpapers. There is no official version for Microsoft Windows, and the developer discourages anyone from porting it. The author considers Microsoft to be "a company with vicious, predatory, anti-competitive business practices" and says that, as one of the original authors of Netscape Navigator, he holds a "personal grudge" against Microsoft because of its behavior during the First Browser War. Software Architecture The XScreenSaver daemon is responsible for detecting idle-ness, blanking and locking the screen, and launching the display modes. The display modes (termed "hacks" from the historical usage "display hack") are each stand-alone programs. This is an important security feature, in that the display modes are sandboxed into a separate process from the screen locking framework. This means that a programming error in one of the graphical display modes cannot compromise the screen locker itself (e.g., a crash in a display mode will not unlock the screen). It also means that a third-party screen saver can be written in any language or with any graphics library, so long as i
https://en.wikipedia.org/wiki/Secure%20Digital%20Music%20Initiative
Secure Digital Music Initiative (SDMI) was a forum formed in late 1998, composed of more than 200 IT, consumer electronics, security technology, ISP and recording industry companies, as well as authors, composers and publishing rightsholders (represented by CISAC and BIEM representatives, mainly from SGAE/SDAE (Gonzalo Mora Velarde and José Manuel Macarro), GEMA (Alexander Wolf and Thomas Kummer-Hardt), SACEM/SDRM (Aline Jelen, Catherine Champarnaud, Laurent Lemasson), MCPS/PRS (Mark Isherwood), ASCAP, BMI (Edward Oshanani), and SODRAC), ostensibly with the purpose of developing technology and rights-management system specifications that, once developed and installed, would protect the playing, storing, distributing and performing of digital music. Specifically, the goals of the SDMI were to provide consumers with convenient access to music online and in new digital distribution systems, to apply digital rights management restrictions to the work of artists, and to promote the development of new music-related business and technologies. SDMI was a direct response to the widespread success of the MP3 file format. According to its web site, SDMI existed to develop “technology specifications that protect the playing, storing, and distributing of digital music such that a new market for digital music may emerge.” It would have been used by DataPlay, an optical disc format that at the time was cheaper and had higher capacity than memory cards, and by SD cards. Method The strategy for the SDMI group involved two stages. The first was to implement a secure digital watermarking scheme. This would allow music to be tagged with a secure watermark that was hard to remove from the source audio without damaging it. The second was to ensure that SDMI-compliant players would not play SDMI tagged music that was not authorized for that device. The reasoning was that even if the files were distributed they could not be played as the device would detect the music was not authoriz
https://en.wikipedia.org/wiki/Decomposer
Decomposers are organisms that break down dead or decaying organisms; they carry out decomposition, a process possible by only certain kingdoms, such as fungi. Like herbivores and predators, decomposers are heterotrophic, meaning that they use organic substrates to get their energy, carbon and nutrients for growth and development. While the terms decomposer and detritivore are often interchangeably used, detritivores ingest and digest dead matter internally, while decomposers directly absorb nutrients through external chemical and biological processes. Thus, invertebrates such as earthworms, woodlice, and sea cucumbers are technically detritivores, not decomposers, since they are unable to absorb nutrients without ingesting them. Fungi The primary decomposer of litter in many ecosystems is fungi. Unlike bacteria, which are unicellular organisms and are decomposers as well, most saprotrophic fungi grow as a branching network of hyphae. While bacteria are restricted to growing and feeding on the exposed surfaces of organic matter, fungi can use their hyphae to penetrate larger pieces of organic matter, below the surface. Additionally, only wood-decay fungi have evolved the enzymes necessary to decompose lignin, a chemically complex substance found in wood. These two factors make fungi the primary decomposers in forests, where litter has high concentrations of lignin and often occurs in large pieces. Fungi decompose organic matter by releasing enzymes to break down the decaying material, after which they absorb the nutrients in the decaying material. Hyphae are used to break down matter and absorb nutrients and are also used in reproduction. When two compatible fungi hyphae grow close to each other, they will then fuse together for reproduction, and form another fungus. See also Chemotroph Micro-animals Microorganism References Further reading Hunt HW, Coleman DC, Ingham ER, Ingham RE, Elliot ET, Moore JC, Rose SL, Reid CPP, Morley CR (1987) "The detrital foo
https://en.wikipedia.org/wiki/Equivariant%20map
In mathematics, equivariance is a form of symmetry for functions from one space with symmetry to another (such as symmetric spaces). A function is said to be an equivariant map when its domain and codomain are acted on by the same symmetry group, and when the function commutes with the action of the group. That is, applying a symmetry transformation and then computing the function produces the same result as computing the function and then applying the transformation. Equivariant maps generalize the concept of invariants, functions whose value is unchanged by a symmetry transformation of their argument. The value of an equivariant map is often (imprecisely) called an invariant. In statistical inference, equivariance under statistical transformations of data is an important property of various estimation methods; see invariant estimator for details. In pure mathematics, equivariance is a central object of study in equivariant topology and its subtopics equivariant cohomology and equivariant stable homotopy theory. Examples Elementary geometry In the geometry of triangles, the area and perimeter of a triangle are invariants: translating or rotating a triangle does not change its area or perimeter. However, triangle centers such as the centroid, circumcenter, incenter and orthocenter are not invariant, because moving a triangle will also cause its centers to move. Instead, these centers are equivariant: applying any Euclidean congruence (a combination of a translation and rotation) to a triangle, and then constructing its center, produces the same point as constructing the center first, and then applying the same congruence to the center. More generally, all triangle centers are also equivariant under similarity transformations (combinations of translation, rotation, and scaling), and the centroid is equivariant under affine transformations. The same function may be an invariant for one group of symmetries and equivariant for a different group of symmetries. For
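The commutation property for the centroid can be checked numerically. The sketch below is illustrative only: it applies an arbitrary affine map to a triangle and confirms that "take the centroid, then transform" agrees with "transform, then take the centroid"; the specific triangle and map coefficients are made up.

```python
# Illustrative check that the centroid is equivariant under an affine map.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def affine(p, a, b, c, d, tx, ty):
    # (x, y) -> (a*x + b*y + tx, c*x + d*y + ty)
    x, y = p
    return (a * x + b * y + tx, c * x + d * y + ty)

triangle = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]
params = (2.0, 1.0, -1.0, 3.0, 5.0, -2.0)

lhs = affine(centroid(triangle), *params)                 # centroid first
rhs = centroid([affine(p, *params) for p in triangle])    # transform first
print(lhs, rhs)   # both are (28/3, -2/3): the operations commute
```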
https://en.wikipedia.org/wiki/Primorial
In mathematics, and more particularly in number theory, primorial, denoted by "#", is a function from natural numbers to natural numbers similar to the factorial function, but rather than successively multiplying positive integers, the function only multiplies prime numbers. The name "primorial", coined by Harvey Dubner, draws an analogy to primes similar to the way the name "factorial" relates to factors. Definition for prime numbers For the nth prime number p_n, the primorial p_n# is defined as the product of the first n primes: p_n# = p_1 · p_2 · … · p_n, where p_k is the kth prime number. For instance, p_5# signifies the product of the first 5 primes: p_5# = 2 × 3 × 5 × 7 × 11 = 2310. The first five primorials p_n# are: 2, 6, 30, 210, 2310. The sequence also includes p_0# = 1 as the empty product. Asymptotically, primorials p_n# grow according to: p_n# = e^((1 + o(1)) n ln n), where o() is little-o notation. Definition for natural numbers In general, for a positive integer n, its primorial, n#, is the product of the primes that are not greater than n; that is, n# = p_π(n)#, where π(n) is the prime-counting function, which gives the number of primes ≤ n. This is equivalent to: n# = n × (n−1)# if n is prime, and n# = (n−1)# otherwise. For example, 12# represents the product of those primes ≤ 12: 12# = 2 × 3 × 5 × 7 × 11 = 2310. Since π(12) = 5, this can be calculated as: 12# = p_π(12)# = p_5# = 2310. Consider the first 12 values of n#: 1, 2, 6, 6, 30, 30, 210, 210, 210, 210, 2310, 2310. We see that for composite n every term n# simply duplicates the preceding term (n−1)#, as given in the definition. In the above example we have 12# = 11# since 12 is a composite number. Primorials are related to the first Chebyshev function, written θ(n), according to: ln(n#) = θ(n). Since θ(n) asymptotically approaches n for large values of n, primorials therefore grow according to: n# = e^((1 + o(1)) n). The idea of multiplying all known primes occurs in some proofs of the infinitude of the prime numbers, where it is used to derive the existence of another prime. Characteristics Let p and q be two adjacent prime numbers. Given any natural number n with p ≤ n < q: n# = p#. For the primorial, the following approximation is known: n# ≤ 4^n. Notes: Using elementary methods, mathematician Denis Hanson showed that n# < 3^n. Using more advanced
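Both definitions are easy to reproduce with a short routine. The sketch below uses trial-division primality testing and is adequate only for small arguments; it is an illustration of the definitions, not an efficient implementation.

```python
# Illustrative sketch: primorials by trial division (fine for small n only).
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def primorial_nth(n: int) -> int:
    """p_n#: product of the first n primes."""
    result, count, candidate = 1, 0, 1
    while count < n:
        candidate += 1
        if is_prime(candidate):
            result *= candidate
            count += 1
    return result

def primorial(n: int) -> int:
    """n#: product of all primes <= n."""
    result = 1
    for p in range(2, n + 1):
        if is_prime(p):
            result *= p
    return result

print([primorial_nth(k) for k in range(1, 6)])   # [2, 6, 30, 210, 2310]
print([primorial(k) for k in range(1, 13)])      # 1, 2, 6, 6, 30, 30, ..., 2310
```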
https://en.wikipedia.org/wiki/Barn%20raising
A barn raising, also historically called a raising bee or rearing in the U.K., is a collective action of a community, in which a barn for one of the members is built or rebuilt collectively by members of the community. Barn raising was particularly common in 18th- and 19th-century rural North America. A barn was a necessary structure for any farmer, for example for storage of cereals and hay and keeping of animals. Yet a barn was also a large and costly structure, the assembly of which required more labor than a typical family could provide. Barn raising addressed the need by enlisting members of the community, unpaid, to assist in the building of their neighbors' barns. Because each member could ask others for help, reciprocation could eventually reasonably be presumed for each participant if the need were to arise. The tradition of "barn raising" continues, more or less unchanged, in some Amish and Old Order Mennonite communities, particularly in Ohio, Indiana, Pennsylvania, and some rural parts of Canada. The practice continues outside of these religious communities, albeit less frequently than in the 19th century. Most frames today are raised using a crane and small crew. Description A large amount of preparation is done before the one to two days a barn raising requires. Lumber and hardware are laid in, plans are made, ground is cleared, and tradesmen are hired. Materials are purchased or traded for by the family who will own the barn once it is complete. Generally, participation is mandatory for community members. These participants are not paid. All able-bodied members of the community are expected to attend. Failure to attend a barn raising without the best of reasons leads to censure within the community. Some specialists brought in from other communities for direction or joinery may be paid, however. One or more people with prior experience or with specific skills are chosen to lead the project. Older people who have participated in many barn raising
https://en.wikipedia.org/wiki/IBM%20RAD6000
The RAD6000 radiation-hardened single-board computer, based on the IBM RISC Single Chip CPU, was manufactured by IBM Federal Systems. IBM Federal Systems was sold to Loral, and by way of acquisition, ended up with Lockheed Martin and is currently a part of BAE Systems Electronic Systems. RAD6000 is mainly known as the onboard computer of numerous NASA spacecraft. History The radiation-hardening of the original RSC 1.1 million-transistor processor to make the RAD6000's CPU was done by IBM Federal Systems Division working with the Air Force Research Laboratory. , there are 200 RAD6000 processors in space on a variety of NASA, United States Department of Defense and commercial spacecraft, including: Mars Exploration Rovers (Spirit and Opportunity) Deep Space 1 probe Mars Polar Lander and Mars Climate Orbiter Mars Odyssey orbiter Spitzer Infrared Telescope Facility MESSENGER probe to Mercury STEREO Spacecraft IMAGE/Explorer 78 MIDEX spacecraft Genesis and Stardust sample return missions Phoenix Mars Polar Lander Dawn Mission to the asteroid belt using ion propulsion Solar Dynamics Observatory, Launched Feb 11, 2010 (flying both RAD6000 and RAD750) Burst Alert Telescope Image Processor on board the Swift Gamma-Ray Burst Mission DSCOVR Deep Space Climate Observatory spacecraft The computer has a maximum clock rate of 33 MHz and a processing speed of about 35 MIPS. In addition to the CPU itself, the RAD6000 has 128 MB of ECC RAM. A typical real-time operating system running on NASA's RAD6000 installations is VxWorks. The Flight boards in the above systems have switchable clock rates of 2.5, 5, 10, or 20 MHz. Reported to have a unit cost somewhere between US$200,000 and US$300,000, RAD6000 computers were released for sale in the general commercial market in 1996. The RAD6000's successor is the RAD750 processor, based on IBM's PowerPC 750. See also IBM RS/6000 PowerPC 601, a consumer chip with similar computing capabilities to the RAD6000 References
https://en.wikipedia.org/wiki/S1909/A2840
S1909/A2840 is a bill that was passed by the New Jersey Legislature in December 2003, and signed into law by Governor James McGreevey on January 4, 2004, that permits human cloning for the purpose of developing and harvesting human stem cells. Specifically, it legalizes the process of cloning a human embryo, and implanting the clone into a womb, provided that the clone is then aborted and used for medical research. The legislation was sponsored by Senators Richard Codey (D-Essex) and Barbara Buono, and Assembly members Neil M. Cohen (D-Union), John F. McKeon, Mims Hackett (D-Essex), and Joan M. Quigley (D-Hudson). The enactment of this law will enable researchers to find cures for debilitating and deadly diseases. Views regarding the legislation Supporters of the legislation hailed it as promoting medical progress through science, giving hope for the development of treatments for debilitating diseases such as Parkinson's disease, Alzheimer's disease, cancer, and diabetes. Assemblyman Neil Cohen lauded it as "not the most significant law we'll write this session—but this century." Paralyzed actor Christopher Reeve, who believed that such legislation may hasten the development of methods to reverse paralysis, testified in support of the bill. However, Congressmen Chris Smith, Mike Ferguson, and Scott Garrett assailed it, saying, "This legislation will launch New Jersey blindly into the vanguard of terrible human-rights violations and grisly human experimentation." They also claim that, in practice, once a clone is developing in a womb, there is nothing that will prevent it from leading to "the world's first human clone being born and starting a horrible new era of human history." New Jersey's Catholic bishops condemned the newly legalized process as violating "a central tenet of all civilized codes on human experimentation beginning with the Nuremberg Code...[It approves] doing deadly harm to a member of the human species solely for the sake of potential benefi
https://en.wikipedia.org/wiki/IDEF
IDEF, initially an abbreviation of ICAM Definition and renamed in 1999 as Integration Definition, is a family of modeling languages in the field of systems and software engineering. They cover a wide range of uses from functional modeling to data, simulation, object-oriented analysis and design, and knowledge acquisition. These definition languages were developed under funding from U.S. Air Force and, although still most commonly used by them and other military and United States Department of Defense (DoD) agencies, are in the public domain. The most-widely recognized and used components of the IDEF family are IDEF0, a functional modeling language building on SADT, and IDEF1X, which addresses information models and database design issues. Overview of IDEF methods IDEF refers to a family of modeling language, which cover a wide range of uses, from functional modeling to data, simulation, object-oriented analysis/design and knowledge acquisition. Eventually the IDEF methods have been defined up to IDEF14: IDEF0: Function modeling IDEF1: Information modeling IDEF1X: Data modeling IDEF2: Simulation model design IDEF3: Process description capture IDEF4: Object-oriented design IDEF5: Ontology description capture IDEF6: Design rationale capture IDEF7: Information system auditing IDEF8: User interface modeling IDEF9: Business constraint discovery IDEF10: Implementation architecture modeling IDEF11: Information artifact modeling IDEF12: Organization modeling IDEF13: Three-schema mapping design IDEF14: Network design In 1995 only the IDEF0, IDEF1X, IDEF2, IDEF3 and IDEF4 had been developed in full. Some of the other IDEF concepts had some preliminary design. Some of the last efforts were new IDEF developments in 1995 toward establishing reliable methods for business constraint discovery IDEF9, design rationale capture IDEF6, human system, interaction design IDEF8, and network design IDEF14. The methods IDEF7, IDEF10, IDEF11, IDEF 12 and IDEF13 haven't b
https://en.wikipedia.org/wiki/Meet-in-the-middle%20attack
The meet-in-the-middle attack (MITM), a known plaintext attack, is a generic space–time tradeoff cryptographic attack against encryption schemes that rely on performing multiple encryption operations in sequence. The MITM attack is the primary reason why Double DES is not used and why a Triple DES key (168-bit) can be brute-forced by an attacker with 2^56 space and 2^112 operations. Description When trying to improve the security of a block cipher, a tempting idea is to encrypt the data several times using multiple keys. One might think this doubles or even n-tuples the security of the multiple-encryption scheme, depending on the number of times the data is encrypted, because an exhaustive search on all possible combinations of keys (simple brute-force) would take 2^(n·k) attempts if the data is encrypted with k-bit keys n times. The MITM is a generic attack which weakens the security benefits of using multiple encryptions by storing intermediate values from the encryptions or decryptions and using those to improve the time required to brute force the decryption keys. This makes a Meet-in-the-Middle attack (MITM) a generic space–time tradeoff cryptographic attack. The MITM attack attempts to find the keys by using both the range (ciphertext) and domain (plaintext) of the composition of several functions (or block ciphers) such that the forward mapping through the first functions is the same as the backward mapping (inverse image) through the last functions, quite literally meeting in the middle of the composed function. For example, although Double DES encrypts the data with two different 56-bit keys, Double DES can be broken with 2^57 encryption and decryption operations. The multidimensional MITM (MD-MITM) uses a combination of several simultaneous MITM attacks as described above, where the meeting happens in multiple positions in the composed function. History Diffie and Hellman first proposed the meet-in-the-middle attack on a hypothetical expansion of a bloc
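The table-building idea is easy to demonstrate on a toy cipher. The sketch below is purely illustrative: a 16-bit block, 8-bit keys, and a made-up invertible round function stand in for DES. It recovers candidate key pairs for a double encryption by storing all forward encryptions of the plaintext and looking up each backward decryption of the ciphertext, i.e. on the order of 2·2^8 cipher operations instead of 2^16.

```python
# Toy meet-in-the-middle demo: 16-bit blocks, 8-bit keys, invented cipher.
MASK = 0xFFFF

def enc(block: int, key: int) -> int:
    # Invertible toy round: XOR with an expanded key, add the key, rotate left by 3.
    x = (block ^ (key * 0x0101)) & MASK
    x = (x + key) & MASK
    return ((x << 3) | (x >> 13)) & MASK

def dec(block: int, key: int) -> int:
    x = ((block >> 3) | (block << 13)) & MASK
    x = (x - key) & MASK
    return (x ^ (key * 0x0101)) & MASK

# Secret double-encryption keys the attacker wants to recover.
k1, k2 = 0x3A, 0xC5
plaintext = 0x1234
ciphertext = enc(enc(plaintext, k1), k2)

# Meet in the middle: table of all forward encryptions of the plaintext...
forward = {}
for k in range(256):
    forward.setdefault(enc(plaintext, k), []).append(k)

# ...then meet it with every backward decryption of the ciphertext.
candidates = [(kf, k) for k in range(256)
              for kf in forward.get(dec(ciphertext, k), [])]
print((k1, k2) in candidates)   # True (possibly alongside a few false positives)
```

In practice a second known plaintext/ciphertext pair is used to weed out the false positives, just as with real Double DES.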
https://en.wikipedia.org/wiki/Sarcoplasmic%20reticulum
The sarcoplasmic reticulum (SR) is a membrane-bound structure found within muscle cells that is similar to the smooth endoplasmic reticulum in other cells. The main function of the SR is to store calcium ions (Ca2+). Calcium ion levels are kept relatively constant, with the concentration of calcium ions within a cell being 10,000 times smaller than the concentration of calcium ions outside the cell. This means that small increases in calcium ions within the cell are easily detected and can bring about important cellular changes (the calcium is said to be a second messenger). Calcium is used to make calcium carbonate (found in chalk) and calcium phosphate, two compounds that the body uses to make teeth and bones. This means that too much calcium within the cells can lead to hardening (calcification) of certain intracellular structures, including the mitochondria, leading to cell death. Therefore, it is vital that calcium ion levels are controlled tightly, and can be released into the cell when necessary and then removed from the cell. Structure The sarcoplasmic reticulum is a network of tubules that extend throughout muscle cells, wrapping around (but not in direct contact with) the myofibrils (contractile units of the cell). Cardiac and skeletal muscle cells contain structures called transverse tubules (T-tubules), which are extensions of the cell membrane that travel into the centre of the cell. T-tubules are closely associated with a specific region of the SR, known as the terminal cisternae in skeletal muscle, with a distance of roughly 12 nanometers, separating them. This is the primary site of calcium release. The longitudinal SR are thinner projects, that run between the terminal cisternae/junctional SR, and are the location where ion channels necessary for calcium ion absorption are most abundant. These processes are explained in more detail below and are fundamental for the process of excitation-contraction coupling in skeletal, cardiac and smooth muscle.
https://en.wikipedia.org/wiki/The%20Tomorrow%20People
The Tomorrow People is a British children's science fiction television series created by Roger Price. Produced by Thames Television for the ITV Network, the series first ran from 30 April 1973 to 19 February 1979. The theme music was composed by Australian music composer, Dudley Simpson, who composed music for two BBC science fiction dramas, Doctor Who (1963) and Blake’s 7 (1978). In 1992, after having much success with running episodes of the original series in America, Nickelodeon requested Price and Thames Television for a new version to be piloted and filmed at Nickelodeon Studios Florida in April 1992, with Price acting as executive producer. This version used the same basic premise as the original series with some changes, and ran until 8 March 1995. A series of audio plays using the original concept and characters (and many of the original series' actors) was produced by Big Finish Productions between 2001 and 2007. In 2013, an American remake of the show premiered on The CW. Premise All incarnations of the show concerned the emergence of the next stage of human evolution (Homo novis) known colloquially as Tomorrow People. Born to human parents, an apparently normal child might at some point between childhood and late adolescence experience a process called 'breaking out' and develop special paranormal abilities. These abilities include psionic powers such as telepathy, telekinesis, and teleportation. However, their psychological make-up prevents them from intentionally killing others. Original series (1970s) The original series was produced by Thames Television for ITV. The Tomorrow People operate from a secret base, The Lab, built in an abandoned London Underground station. The team constantly watches for new Tomorrow People "breaking out" (usually around the age of puberty) to help them through the process as the youngsters endure mental agonies as their minds suddenly change. They sometimes deal with attention from extraterrestrial species as well as
https://en.wikipedia.org/wiki/Bell%E2%80%93LaPadula%20model
The Bell–LaPadula Model (BLP) is a state machine model used for enforcing access control in government and military applications. It was developed by David Elliott Bell, and Leonard J. LaPadula, subsequent to strong guidance from Roger R. Schell, to formalize the U.S. Department of Defense (DoD) multilevel security (MLS) policy. The model is a formal state transition model of computer security policy that describes a set of access control rules which use security labels on objects and clearances for subjects. Security labels range from the most sensitive (e.g., "Top Secret"), down to the least sensitive (e.g., "Unclassified" or "Public"). The Bell–LaPadula model is an example of a model where there is no clear distinction between protection and security. Features The Bell–LaPadula model focuses on data confidentiality and controlled access to classified information, in contrast to the Biba Integrity Model which describes rules for the protection of data integrity. In this formal model, the entities in an information system are divided into subjects and objects. The notion of a "secure state" is defined, and it is proven that each state transition preserves security by moving from secure state to secure state, thereby inductively proving that the system satisfies the security objectives of the model. The Bell–LaPadula model is built on the concept of a state machine with a set of allowable states in a computer system. The transition from one state to another state is defined by transition functions. A system state is defined to be "secure" if the only permitted access modes of subjects to objects are in accordance with a security policy. To determine whether a specific access mode is allowed, the clearance of a subject is compared to the classification of the object (more precisely, to the combination of classification and set of compartments, making up the security level) to determine if the subject is authorized for the specific access mode. The clearance/classi
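The clearance-versus-classification comparison is often summarised as "no read up, no write down". The sketch below is an illustrative simplification of that idea only: it uses a linear ordering of four example levels and ignores compartments and the discretionary access matrix that the full model also includes.

```python
# Simplified Bell-LaPadula check: linear levels only, no compartments.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def read_allowed(subject_clearance: str, object_label: str) -> bool:
    # Simple security property: no read up.
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def write_allowed(subject_clearance: str, object_label: str) -> bool:
    # Star (*) property: no write down.
    return LEVELS[subject_clearance] <= LEVELS[object_label]

print(read_allowed("Secret", "Top Secret"))     # False: reading up is denied
print(write_allowed("Secret", "Confidential"))  # False: writing down is denied
print(read_allowed("Secret", "Confidential"))   # True
```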
https://en.wikipedia.org/wiki/Rhesus%20macaque
The rhesus macaque (Macaca mulatta), colloquially rhesus monkey, is a species of Old World monkey. There are between six and nine recognised subspecies that are split between two groups, the Chinese-derived and the Indian-derived. Generally brown or grey in colour, it is in length with a tail and weighs . It is native to South, Central, and Southeast Asia and has the widest geographic range of all non-human primates, occupying a great diversity of altitudes and a great variety of habitats, from grasslands to arid and forested areas, but also close to human settlements. Feral colonies are found in the United States, thought to be either released by humans or escapees after hurricanes destroyed zoo and wildlife park facilities. The rhesus macaque is diurnal, arboreal, and terrestrial. It is mostly herbivorous, mainly eating fruit, but will also consume seeds, roots, buds, bark, and cereals. Studies show almost 100 different plant species in its diet. Rhesus macaques are generalist omnivores, and have a highly varied and flexible diet. With an increase in anthropogenic land changes, rhesus macaques have evolved alongside intense and rapid environmental disturbance associated with human agriculture and urbanization resulting in proportions of their diet to be altered. It will also eat invertebrates, drink water from streams and rivers, and has specialised cheek pouches where it can temporarily store food. Like other macaques, the rhesus macaque is gregarious, with troops comprising 20–200 individuals. The social groups are matrilineal, whereby a female's rank is decided by the rank of her mother. There has been extensive research into female philopatry, common in social animals, as females tend not to leave the social group. The rhesus macaque communicates with a variety of facial expressions, vocalisations, body postures, and gestures. Facial expressions are used to appease or redirect aggression, assert dominance, and threaten other individuals, and vocalisations
https://en.wikipedia.org/wiki/Windrow%20composting
In agriculture, windrow composting is the production of compost by piling organic matter or biodegradable waste, such as animal manure and crop residues, in long rows – windrow. As the process is aerobic, it is also known as Open Windrow Composting (OWC) or Open Air Windrow Composting (OAWC). This method is suited to producing large volumes of compost. These rows are generally turned to improve porosity and oxygen content, mix in or remove moisture, and redistribute cooler and hotter portions of the pile. Windrow composting is a commonly used farm scale composting method. Composting process control parameters include the initial ratios of carbon and nitrogen rich materials, the amount of bulking agent added to assure air porosity, the pile size, moisture content, and turning frequency. The temperature of the windrows must be measured and logged constantly to determine the optimum time to turn them for quicker compost production. Compost windrow turners Compost windrow turners were developed to produce compost on a large scale by Fletcher Sims Jr. of Canyon, Texas. They are traditionally a large machine that straddles a windrow of or more high, by as much as across. Although smaller machines exist for small windrows, most operations use large machines for volume production. Turners drive through the windrow at a slow rate of forward movement. They have a steel drum with paddles that are rapidly turning. As the turner moves through the windrow, fresh air (oxygen) is injected into the compost by the drum/paddle assembly, and waste gases produced by bacterial decomposition are vented. The oxygen feeds the aerobic bacteria and thus speeds the composting process. Utilization To properly use a compost windrow turner, it is ideal to compost on a hard surfaced pad. Heavy-duty compost windrow turners allow the user to obtain optimum results with the aerobic hot composting process. By using four wheel drive or tracks the windrow turner is capable of turning compost in
https://en.wikipedia.org/wiki/Vitali%20set
In mathematics, a Vitali set is an elementary example of a set of real numbers that is not Lebesgue measurable, found by Giuseppe Vitali in 1905. The Vitali theorem is the existence theorem that there are such sets. There are uncountably many Vitali sets, and their existence depends on the axiom of choice. In 1970, Robert Solovay constructed a model of Zermelo–Fraenkel set theory without the axiom of choice where all sets of real numbers are Lebesgue measurable, assuming the existence of an inaccessible cardinal (see Solovay model). Measurable sets Certain sets have a definite 'length' or 'mass'. For instance, the interval [0, 1] is deemed to have length 1; more generally, an interval [a, b], a ≤ b, is deemed to have length b − a. If we think of such intervals as metal rods with uniform density, they likewise have well-defined masses. The set [0, 1] ∪ [2, 3] is composed of two intervals of length one, so we take its total length to be 2. In terms of mass, we have two rods of mass 1, so the total mass is 2. There is a natural question here: if E is an arbitrary subset of the real line, does it have a 'mass' or 'total length'? As an example, we might ask what is the mass of the set of rational numbers between 0 and 1, given that the mass of the interval [0, 1] is 1. The rationals are dense in the reals, so any value between and including 0 and 1 may appear reasonable. However the closest generalization to mass is sigma additivity, which gives rise to the Lebesgue measure. It assigns a measure of b − a to the interval [a, b], but will assign a measure of 0 to the set of rational numbers because it is countable. Any set which has a well-defined Lebesgue measure is said to be "measurable", but the construction of the Lebesgue measure (for instance using Carathéodory's extension theorem) does not make it obvious whether non-measurable sets exist. The answer to that question involves the axiom of choice. Construction and proof A Vitali set is a subset of the interva
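Where the excerpt breaks off, the usual construction and the contradiction with countable additivity can be summarised as follows; this is a compressed sketch of the standard argument, not a substitute for the full proof.

```latex
% Sketch of the Vitali construction and the clash with sigma-additivity.
% Declare x ~ y iff x - y is rational; this partitions [0,1] into equivalence classes.
% By the axiom of choice, pick one representative from each class to form V \subseteq [0,1].
\begin{align*}
  &\text{For } q \in \mathbb{Q} \cap [-1,1], \text{ the translates } V_q = V + q
   \text{ are pairwise disjoint and satisfy } [0,1] \subseteq \bigcup_q V_q \subseteq [-1,2].\\
  &\text{If } V \text{ were measurable with } \lambda(V) = \alpha, \text{ then countable
   additivity and translation invariance give } 1 \le \textstyle\sum_{q} \alpha \le 3,\\
  &\text{which fails for } \alpha = 0 \text{ (the sum is } 0\text{) and for }
   \alpha > 0 \text{ (the sum is infinite); hence } V \text{ is not Lebesgue measurable.}
\end{align*}
```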
https://en.wikipedia.org/wiki/Elcoteq
Elcoteq SE was a Finnish consumer electronics contract manufacturer, EMS, and ODM company headquartered in Luxembourg. The company filed for bankruptcy protection in Luxembourg on October 6, 2011. It was a manufacturer of the BlackBerry and also performed repair and refurbishment services. History Founded in 1984 as a microelectronics unit of the Lohja Corporation, the company became independent in an early 1990s management buyout. Elcoteq made an IPO on the Helsinki Stock Exchange in November 1997. It manufactured the ill-fated Microsoft Kin for Sharp Corporation in the late 2000s. On October 6, 2011, Elcoteq filed for bankruptcy in Luxembourg. The loss of a major client, Nokia, to Asian sourcing outfits may have been a contributing cause. Production base expansion Its original production base was in Lohja, Finland, and in 1992 it established an Estonian base. By 1999 it had expanded production to include non-European bases, too. Name Representing electronics, contract manufacturing, and technology, the name Elcoteq was the company's second choice after Finnish regulators would not allow it to register the name Mikrotec. Clients The first Elcoteq customers were Ericsson and Nokia. Other clients have included: Aastra Ascom EADS Funai Huawei Humax Marconi Electronic Systems Nokia Siemens Networks Philips Research In Motion Sharp Corporation Siemens Sony Ericsson Swissvoice Tellabs Thomson SA Production bases Elcoteq has had production bases in Brazil, China, Estonia, India, Hungary, Romania, and Mexico. References External links Official website Electronics companies of Finland Companies formerly listed on Nasdaq Helsinki
https://en.wikipedia.org/wiki/H-theorem
In classical statistical mechanics, the H-theorem, introduced by Ludwig Boltzmann in 1872, describes the tendency to decrease in the quantity H (defined below) in a nearly-ideal gas of molecules. As this quantity H was meant to represent the entropy of thermodynamics, the H-theorem was an early demonstration of the power of statistical mechanics as it claimed to derive the second law of thermodynamics—a statement about fundamentally irreversible processes—from reversible microscopic mechanics. It is thought to prove the second law of thermodynamics, albeit under the assumption of low-entropy initial conditions. The H-theorem is a natural consequence of the kinetic equation derived by Boltzmann that has come to be known as Boltzmann's equation. The H-theorem has led to considerable discussion about its actual implications, with major themes being: What is entropy? In what sense does Boltzmann's quantity H correspond to the thermodynamic entropy? Are the assumptions (especially the assumption of molecular chaos) behind Boltzmann's equation too strong? When are these assumptions violated? Name and pronunciation Boltzmann in his original publication writes the symbol E (as in entropy) for its statistical function. Years later, Samuel Hawksley Burbury, one of the critics of the theorem, wrote the function with the symbol H, a notation that was subsequently adopted by Boltzmann when referring to his "H-theorem". The notation has led to some confusion regarding the name of the theorem. Even though the statement is usually referred to as the "Aitch theorem", sometimes it is instead called the "Eta theorem", as the capital Greek letter Eta (Η) is undistinguishable from the capital version of Latin letter h (H). Discussions have been raised on how the symbol should be understood, but it remains unclear due to the lack of written sources from the time of the theorem. Studies of the typography and the work of J.W. Gibbs seem to favour the interpretation of H as Eta. Def
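The excerpt cuts off just before the definition section; for reference, the quantity Boltzmann worked with is conventionally written as below. This is the standard textbook form, stated here without the surrounding derivation.

```latex
% H functional for a velocity distribution f(v, t), and the theorem's claim.
H(t) = \int f(v, t) \, \ln f(v, t) \, d^3 v ,
\qquad
\frac{dH}{dt} \le 0
\quad \text{under the molecular-chaos (Stosszahlansatz) assumption.}
```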
https://en.wikipedia.org/wiki/Compressive%20stress
In long, slender structural elements — such as columns or truss bars — an increase of compressive force F leads to structural failure due to buckling at lower stress than the compressive strength. Compressive stress has stress units (force per unit area), usually with negative values to indicate compaction. However, in geotechnical engineering, compressive stress is represented with positive values. Compressive stress is defined in the same way as tensile stress but takes negative values so as to express compression, since the change in length dL has the opposite direction (L is the length of the object). Compressive stress = −F/A, where F is the force applied to the object and A is the cross-sectional area of the object. Materials science
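As a worked example with hypothetical numbers, using the sign convention above in which compression is negative:

```python
# Hypothetical worked example of compressive stress, sigma = -F / A.
force_newtons = 50_000.0   # axial compressive force on a column
area_m2 = 0.01             # cross-sectional area (e.g. a 0.1 m x 0.1 m section)

stress_pa = -force_newtons / area_m2
print(f"{stress_pa / 1e6:.1f} MPa")   # -5.0 MPa (negative: compression)
```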
https://en.wikipedia.org/wiki/Skype
Skype () is a proprietary telecommunications application operated by Skype Technologies, a division of Microsoft, best known for VoIP-based videotelephony, videoconferencing and voice calls. It also has instant messaging, file transfer, debit-based calls to landline and mobile telephones (over traditional telephone networks), and other features. Skype is available on various desktop, mobile, and video game console platforms. Skype was created by Niklas Zennström, Janus Friis, and four Estonian developers and first released in August 2003. In September 2005, eBay acquired Skype for $2.6 billion. In September 2009, Silver Lake, Andreessen Horowitz, and the Canada Pension Plan Investment Board bought 65% of Skype for $1.9 billion from eBay, valuing the business at $2.92 billion. In May 2011, Microsoft bought Skype for $8.5 billion and used it to replace their Windows Live Messenger. As of 2011, most of the development team and 44% of all the division's employees were in Tallinn and Tartu, Estonia. Skype originally featured a hybrid peer-to-peer and client–server system. It became entirely powered by Microsoft-operated supernodes in May 2012; in 2017, it changed from a peer-to-peer service to a centralized Azure-based service. As of February 2023, Skype was used by 36 million people each day. Etymology The name for the software is derived from "Sky peer-to-peer", which was then abbreviated to "Skyper". However, some of the domain names associated with "Skyper" were already taken. Dropping the final "r" left the current title "Skype", for which domain names were available. History Skype was founded in 2003 by Niklas Zennström, from Sweden, and Janus Friis, from Denmark. The Skype software was created by Estonians Ahti Heinla, Priit Kasesalu, Jaan Tallinn, and Toivo Annus. Friis and Annus are credited with the idea of reducing the cost of voice calls by using a P2P protocol like that of Kazaa. An early alpha version was created and tested in spring 2003, and the fi
https://en.wikipedia.org/wiki/Onion%20routing
Onion routing is a technique for anonymous communication over a computer network. In an onion network, messages are encapsulated in layers of encryption, analogous to the layers of an onion. The encrypted data is transmitted through a series of network nodes called "onion routers," each of which "peels" away a single layer, revealing the data's next destination. When the final layer is decrypted, the message arrives at its destination. The sender remains anonymous because each intermediary knows only the location of the immediately preceding and following nodes. While onion routing provides a high level of security and anonymity, there are methods to break the anonymity of this technique, such as timing analysis. History Onion routing was developed in the mid-1990s at the U.S. Naval Research Laboratory by employees Paul Syverson, Michael G. Reed, and David Goldschlag to protect U.S. intelligence communications online. It was then refined by the Defense Advanced Research Projects Agency (DARPA) and patented by the Navy in 1998. This method was publicly released by the same employees through publishing an article in the IEEE Journal on Selected Areas in Communications the same year. It depicted the use of the method to protect the user from the network and outside observers who eavesdrop and conduct traffic analysis attacks. The most important part of this research is the configurations and applications of onion routing on the existing e-services, such as Virtual private network, Web-browsing, Email, Remote login, and Electronic cash. Based on the existing onion routing technology, computer scientists Roger Dingledine and Nick Mathewson joined Paul Syverson in 2002 to develop what has become the largest and best-known implementation of onion routing, then called The Onion Routing project (Tor project). After the Naval Research Laboratory released the code for Tor under a free license, Dingledine, Mathewson and five others founded The Tor Project as a non-profit o
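The layering can be illustrated with a few lines of Python using the cryptography package's Fernet recipe. This sketch only shows the wrap-and-peel idea: real onion routing also embeds next-hop routing information in each layer and negotiates per-circuit keys with each relay, none of which is modelled here.

```python
# Illustrative onion layering with symmetric keys (pip install cryptography).
from cryptography.fernet import Fernet

# One symmetric key per relay on the chosen path (entry, middle, exit).
path_keys = [Fernet.generate_key() for _ in range(3)]

# The sender wraps the message: the innermost layer is for the last relay.
onion = b"hello, destination"
for key in reversed(path_keys):
    onion = Fernet(key).encrypt(onion)

# Each relay peels exactly one layer and forwards the remainder.
for key in path_keys:
    onion = Fernet(key).decrypt(onion)

print(onion)   # b'hello, destination'
```

Because each relay can remove only its own layer, no single relay sees both the original sender and the final plaintext destination, which is the property the text describes.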
https://en.wikipedia.org/wiki/Palindromic%20prime
In mathematics, a palindromic prime (sometimes called a palprime) is a prime number that is also a palindromic number. Palindromicity depends on the base of the number system and its notational conventions, while primality is independent of such concerns. The first few decimal palindromic primes are: 2, 3, 5, 7, 11, 101, 131, 151, 181, 191, 313, 353, 373, 383, 727, 757, 787, 797, 919, 929, … Except for 11, all palindromic primes have an odd number of digits, because the divisibility test for 11 tells us that every palindromic number with an even number of digits is a multiple of 11. It is not known if there are infinitely many palindromic primes in base 10. The largest known is 10^1888529 − 10^944264 − 1, which has 1,888,529 digits, and was found on 18 October 2021 by Ryan Propper and Serge Batalov. On the other hand, it is known that, for any base, almost all palindromic numbers are composite, i.e. the ratio between palindromic composites and all palindromes less than n tends to 1. Other bases In binary, the palindromic primes include the Mersenne primes and the Fermat primes. All binary palindromic primes except binary 11 (decimal 3) have an odd number of digits; those palindromes with an even number of digits are divisible by 3. The sequence of binary palindromic primes begins (in binary): 11, 101, 111, 10001, 11111, 1001001, 1101011, 1111111, 100000001, 100111001, 110111011, ... The palindromic primes in base 12 are: (using A and B for ten and eleven, respectively) 2, 3, 5, 7, B, 11, 111, 131, 141, 171, 181, 1B1, 535, 545, 565, 575, 585, 5B5, 727, 737, 747, 767, 797, B1B, B2B, B6B, ... Palindromic primes can also be generated using the Smarandache function (Kempner function) together with a prime number algorithm. Property Due to the superstitious significance of the numbers it contains, the palindromic prime 1000000000000066600000000000001 is known as Belphegor's Prime, named after Belphegor, one of the seven princes of Hell. Belphegor's Prime consist
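The first terms of the decimal sequence above can be reproduced with a short brute-force search; the sketch is illustrative only, since trial division is far too slow for large terms.

```python
# Brute-force listing of small decimal palindromic primes (illustrative only).
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def palindromic_primes(limit: int):
    return [n for n in range(2, limit)
            if str(n) == str(n)[::-1] and is_prime(n)]

print(palindromic_primes(1000))
# [2, 3, 5, 7, 11, 101, 131, 151, 181, 191, 313, 353, 373, 383,
#  727, 757, 787, 797, 919, 929]
```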
https://en.wikipedia.org/wiki/Zimmermann%E2%80%93Sassaman%20key-signing%20protocol
In cryptography, the Zimmermann–Sassaman key-signing protocol is a protocol to speed up the public key fingerprint verification part of a key signing party. It requires some work before the event. The protocol was invented during a key signing party with Len Sassaman, Werner Koch, Phil Zimmermann, and others. Sassaman-Efficient Before the party The Sassaman-Efficient method is the first of the 2 types developed. Before the event, all participants email the keysigning coordinator their public keys. The coordinator then makes a text file of all the keys and accompanied fingerprint and then hashes it. They then proceed to make the text file and checksum available to all participants. The participants then download the file and check the validity using the hash. Then the participants print out the list and make sure that their own key is correct. During the party Everyone brings their own key list so that they know it is correct and not manipulated. Then the coordinator reads aloud or projects the checksums of the keys. Each participant verifies and states that their key is correct and once that is established a check mark can be put by that key. Once all the keys have been checked then the line folds upon itself and the participants then show each other at least 2 government-issued IDs. Once sufficient verification is established with the authenticity of the person, the other participant puts a second check mark by their name. After the party The participants then fetch the keys from a server or obtain a keyring made for the event. They sign each key on their list with 2 check marks and make sure that the fingerprints match. The signatures are then uploaded to the server or mailed directly to the key owner (if requested). Sassaman-Projected The Sassaman-Projected method is a modified version of the Sassaman-Efficient, with the purpose for large groups. They both follow the same way with the exception of verifying identity. Instead of doing it individually the 2 f
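The coordinator's preparation step amounts to publishing a key-list file together with a digest that every participant can recompute after downloading it. A minimal sketch follows; the file name keylist.txt is hypothetical and the choice of SHA-256 is illustrative rather than mandated by the protocol.

```python
# Illustrative: hash a key-list file so participants can verify their copy.
import hashlib

def keylist_digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:                      # read in chunks
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# The coordinator publishes this value; each participant recomputes it
# on their own copy before printing the list they bring to the party.
print(keylist_digest("keylist.txt"))
```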
https://en.wikipedia.org/wiki/Key%20signing%20party
In public-key cryptography, a key signing party is an event at which people present their public keys to others in person, who, if they are confident the key actually belongs to the person who claims it, digitally sign the certificate containing that public key and the person's name, etc. Key signing parties are common within the PGP and GNU Privacy Guard community, as the PGP public key infrastructure does not depend on a central key certifying authority, but relies instead on a distributed web of trust approach. Key signing parties are a way to strengthen the web of trust. Participants at a key signing party are expected to present adequate identity documents. Although PGP keys are generally used with personal computers for Internet-related applications, key signing parties themselves generally do not involve computers, since that would give adversaries increased opportunities for subterfuge. Rather, participants write down a string of letters and numbers, called a public key fingerprint, which represents their key. The fingerprint is created by a cryptographic hash function, which condenses the public key down to a string which is shorter and more manageable. Participants exchange these fingerprints as they verify each other's identification. Then, after the party, they obtain the public keys corresponding to the fingerprints they received and digitally sign them. See also Zimmermann–Sassaman key-signing protocol Web of trust CryptoParty References External links Pius: Sign entire keyrings and send encrypted emails automatically Keysigning Party Howto Biglumber – Keysigning coordination website Debian wiki: Keysigning – practical guidance from Debian developers Key management OpenPGP
https://en.wikipedia.org/wiki/LCD%20projector
An LCD projector is a type of video projector for displaying video, images or computer data on a screen or other flat surface. It is a modern equivalent of the slide projector or overhead projector. To display images, LCD (liquid-crystal display) projectors typically send light from a metal-halide lamp through a prism or series of dichroic filters that separates light to three polysilicon panelsone each for the red, green and blue components of the video signal. As polarized light passes through the panels (combination of polarizer, LCD panel and analyzer), individual pixels can be opened to allow light to pass or closed to block the light. The combination of open and closed pixels can produce a wide range of colors and shades in the projected image. Metal-halide lamps are used because they output an ideal color temperature and a broad spectrum of color. These lamps also have the ability to produce an extremely large amount of light within a small area; current projectors average about 2,000 to 15,000 American National Standards Institute (ANSI) lumens. Other technologies, such as Digital Light Processing (DLP) and liquid crystal on silicon (LCOS) are also becoming more popular in modestly priced video projection. Projection surfaces Because they use small lamps and the ability to project an image on any flat surface, LCD projectors tend to be smaller and more portable than some other types of projection systems. Even so, the best image quality is found using a blank white, grey, or black (which blocks reflected ambient light) surface, so dedicated projection screens are often used. Perceived color in a projected image is a factor of both projection surface and projector quality. Since white is more of a neutral color, white surfaces are best suited for natural color tones; as such, white projection surfaces are more common in most business and school presentation environments. However, darkest black in a projected image is dependent on how dark the screen is
https://en.wikipedia.org/wiki/SMPTE%20color%20bars
SMPTE color bars are a television test pattern used where the NTSC video standard is utilized, including countries in North America. The Society of Motion Picture and Television Engineers (SMPTE) refers to the pattern as Engineering Guideline (EG) 1-1990. Its components are a known standard, and created by test pattern generators. Comparing it as received to the known standard gives video engineers an indication of how an NTSC video signal has been altered by recording or transmission and what adjustments must be made to bring it back to specification. It is also used for setting a television monitor or receiver to reproduce NTSC chrominance and luminance information correctly. A precursor to the SMPTE test pattern was conceived by Norbert D. Larky (1927–2018) and David D. Holmes (1926–2006) of RCA Laboratories and first published in RCA Licensee Bulletin LB-819 on February 7, 1951. U.S. patent 2,742,525 Color Test Pattern Generator (now expired) was awarded on April 17, 1956, to Larky and Holmes. Later, the EIA published a standard, RS-189A, which in 1976 became EIA-189A, which described a Standard Color Bar Signal, intended for use as a test signal for adjustment of color monitors, adjustment of encoders, and rapid checks of color television transmission systems. In 1977, A. A. Goldberg, of the CBS Technology Center, described an improved color bar test signal developed at the center by Hank Mahler (1936–2021) that was then submitted to the SMPTE TV Video Technology Committee for consideration as a SMPTE recommended practice. This improved test signal was published as the standard SMPTE ECR 1-1978. Its development by CBS was awarded a Technology & Engineering Emmy Award in 2002. CBS did not file a patent application on the test signal, thereby putting it into the public domain for general use by the industry. An extended version of the SMPTE color bars, SMPTE RP 219:2002 was introduced to test HDTV signals (see subsection). Although color bars were originally d
https://en.wikipedia.org/wiki/Christmas%20Bird%20Count
The Christmas Bird Count (CBC) is a census of birds in the Western Hemisphere, performed annually in the early Northern-hemisphere winter by volunteer birdwatchers and administered by the National Audubon Society. The purpose is to provide population data for use in science, especially conservation biology, though many people participate for recreation. The CBC is the longest-running citizen science survey in the world. History Up through the 19th century, many North Americans participated in the tradition of Christmas "side hunts", in which they competed at how many birds they could kill, regardless of whether they had any use for the carcasses and of whether the birds were beneficial, beautiful, or rare. In December 1900, the U.S. ornithologist Frank Chapman, founder of Bird-Lore (which became Audubon magazine), proposed counting birds on Christmas instead of killing them. On Christmas Day of that year, 27 observers took part in the first count in 25 places in the United States and Canada, the count totaling 18,500 individual birds belonging to 90 species. Since then the counts have been held every winter, usually with increasing numbers of observers. For instance, the 101st count, in the winter of 2000–2001, involved 52,471 people in 1,823 places in 17 countries (but mostly in the U.S. and Canada). During the 113th count (winter 2012–2013), 71,531 people participated in 2,369 locations. The National Audubon Society now partners with Bird Studies Canada, the Gulf Coast Bird Observatory of Texas (responsible for CBCs in Mexico), and the Red Nacional de Observadores de Aves (RNOA, National Network of Bird Observers) and the Instituto Alexander von Humboldt of Colombia. The greatest number of bird species ever reported by any U.S. location in a single count is 250, observed on December 19, 2005, in the Matagorda County-Mad Island Marsh count circle around Matagorda and Palacios, Texas. The greatest number of bird species ever reported by a CBC circle in the worl
https://en.wikipedia.org/wiki/Compression%20%28physics%29
In mechanics, compression is the application of balanced inward ("pushing") forces to different points on a material or structure, that is, forces with no net sum or torque, directed so as to reduce its size in one or more directions. It is contrasted with tension or traction, the application of balanced outward ("pulling") forces; and with shearing forces, directed so as to displace layers of the material parallel to each other. The compressive strength of materials and structures is an important engineering consideration. In uniaxial compression, the forces are directed along one direction only, so that they act towards decreasing the object's length along that direction. The compressive forces may also be applied in multiple directions; for example inwards along the edges of a plate or all over the side surface of a cylinder, so as to reduce its area (biaxial compression), or inwards over the entire surface of a body, so as to reduce its volume. Technically, a material is under a state of compression, at some specific point and along a specific direction, if the normal component of the stress vector across a surface normal to that direction points opposite to that direction. If the stress vector itself is opposite to the direction, the material is said to be under normal compression or pure compressive stress along it. In a solid, the amount of compression generally depends on the direction considered, and the material may be under compression along some directions but under traction along others. If the stress vector is purely compressive and has the same magnitude for all directions, the material is said to be under isotropic compression, hydrostatic compression, or bulk compression. This is the only type of static compression that liquids and gases can bear. It affects the volume of the material, as quantified by the bulk modulus and the volumetric strain. The in
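The condition described in prose above can be restated compactly. The following LaTeX is a hedged reconstruction using standard continuum-mechanics symbols (the Cauchy stress tensor, a unit normal, and the tension-positive sign convention are assumptions, since the original notation was lost in extraction):

\[
\mathbf{t} = \boldsymbol{\sigma}\,\mathbf{n}, \qquad
\sigma_n = \mathbf{n}\cdot\boldsymbol{\sigma}\,\mathbf{n} < 0
\quad \text{(compression along } \mathbf{n}\text{)}, \qquad
\boldsymbol{\sigma} = -p\,\mathbf{I},\ p > 0
\quad \text{(isotropic or hydrostatic compression)}.
\]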
https://en.wikipedia.org/wiki/Thermodynamic%20system
A thermodynamic system is a body of matter and/or radiation separate from its surroundings that can be studied using the laws of thermodynamics. A thermodynamic system may be an isolated system, a closed system, or an open system. An isolated system does not exchange matter or energy with its surroundings. A closed system may exchange heat, experience forces, and exert forces, but does not exchange matter. An open system can interact with its surroundings by exchanging both matter and energy. The physical condition of a thermodynamic system at a given time is described by its state, which can be specified by the values of a set of thermodynamic state variables. A thermodynamic system is in thermodynamic equilibrium when there are no macroscopically apparent flows of matter or energy within it or between it and other systems. Overview Thermodynamic equilibrium is characterized by absence of flow of mass or energy. Equilibrium thermodynamics, as a subject in physics, considers macroscopic bodies of matter and energy in states of internal thermodynamic equilibrium. It uses the concept of thermodynamic processes, by which bodies pass from one equilibrium state to another by transfer of matter and energy between them. The term 'thermodynamic system' is used to refer to bodies of matter and energy in the special context of thermodynamics. The possible equilibria between bodies are determined by the physical properties of the walls that separate the bodies. Equilibrium thermodynamics in general does not measure time. Equilibrium thermodynamics is a relatively simple and well settled subject. One reason for this is the existence of a well defined physical quantity called 'the entropy of a body'. Non-equilibrium thermodynamics, as a subject in physics, considers bodies of matter and energy that are not in states of internal thermodynamic equilibrium, but are usually participating in processes of transfer that are slow enough to allow description in terms of quantities th
https://en.wikipedia.org/wiki/Brucellosis
Brucellosis is a zoonosis caused by ingestion of unpasteurized milk from infected animals, or close contact with their secretions. It is also known as undulant fever, Malta fever, and Mediterranean fever. The bacteria causing this disease, Brucella, are small, Gram-negative, nonmotile, nonspore-forming, rod-shaped (coccobacilli) bacteria. They function as facultative intracellular parasites, causing chronic disease, which usually persists for life. Four species infect humans: B. abortus, B. canis, B. melitensis, and B. suis. B. abortus is less virulent than B. melitensis and is primarily a disease of cattle. B. canis affects dogs. B. melitensis is the most virulent and invasive species; it usually infects goats and occasionally sheep. B. suis is of intermediate virulence and chiefly infects pigs. Symptoms include profuse sweating and joint and muscle pain. Brucellosis has been recognized in animals and humans since the early 20th century. Signs and symptoms The symptoms are like those associated with many other febrile diseases, but with emphasis on muscular pain and night sweats. The duration of the disease can vary from a few weeks to many months or even years. In the first stage of the disease, bacteremia occurs and leads to the classic triad of undulant fevers, sweating (often with a characteristic foul, moldy smell sometimes likened to wet hay), and migratory arthralgia and myalgia (joint and muscle pain). Blood tests characteristically reveal a low number of white blood cells and red blood cells, show some elevation of liver enzymes such as aspartate aminotransferase and alanine aminotransferase, and demonstrate positive Bengal rose and Huddleston reactions. Gastrointestinal symptoms occur in 70% of cases and include nausea, vomiting, decreased appetite, unintentional weight loss, abdominal pain, constipation, diarrhea, an enlarged liver, liver inflammation, liver abscess, and an enlarged spleen. This complex is, at least in Portugal, Israel, Syria, and J
https://en.wikipedia.org/wiki/Animal%20cognition
Animal cognition encompasses the mental capacities of non-human animals including insect cognition. The study of animal conditioning and learning used in this field was developed from comparative psychology. It has also been strongly influenced by research in ethology, behavioral ecology, and evolutionary psychology; the alternative name cognitive ethology is sometimes used. Many behaviors associated with the term animal intelligence are also subsumed within animal cognition. Researchers have examined animal cognition in mammals (especially primates, cetaceans, elephants, dogs, cats, pigs, horses, cattle, raccoons and rodents), birds (including parrots, fowl, corvids and pigeons), reptiles (lizards, snakes, and turtles), fish and invertebrates (including cephalopods, spiders and insects). Historical background Earliest inferences The mind and behavior of non-human animals has captivated the human imagination for centuries. Many writers, such as Descartes, have speculated about the presence or absence of the animal mind. These speculations led to many observations of animal behavior before modern science and testing were available. This ultimately resulted in the creation of multiple hypotheses about animal intelligence. One of Aesop's Fables was The Crow and the Pitcher, in which a crow drops pebbles into a vessel of water until he is able to drink. This was a relatively accurate reflection of the capability of corvids to understand water displacement. The Roman naturalist Pliny the Elder was the earliest to attest that said story reflects the behavior of real-life corvids. Aristotle, in his biology, hypothesized a causal chain where an animal's sense organs transmitted information to an organ capable of making decisions, and then to a motor organ. Despite Aristotle's cardiocentrism (mistaken belief that cognition occurred in the heart), this approached some modern understandings of information processing. Early inferences were not necessarily precise or ac
https://en.wikipedia.org/wiki/Fa%C3%A0%20di%20Bruno%27s%20formula
Faà di Bruno's formula is an identity in mathematics generalizing the chain rule to higher derivatives. It is named after , although he was not the first to state or prove the formula. In 1800, more than 50 years before Faà di Bruno, the French mathematician Louis François Antoine Arbogast had stated the formula in a calculus textbook, which is considered to be the first published reference on the subject. Perhaps the most well-known form of Faà di Bruno's formula says that where the sum is over all -tuples of nonnegative integers satisfying the constraint Sometimes, to give it a memorable pattern, it is written in a way in which the coefficients that have the combinatorial interpretation discussed below are less explicit: Combining the terms with the same value of and noticing that has to be zero for leads to a somewhat simpler formula expressed in terms of Bell polynomials : Combinatorial form The formula has a "combinatorial" form: where runs through the set of all partitions of the set , "" means the variable runs through the list of all of the "blocks" of the partition , and denotes the cardinality of the set (so that is the number of blocks in the partition and is the size of the block ). Example The following is a concrete explanation of the combinatorial form for the case. The pattern is: The factor corresponds to the partition 2 + 1 + 1 of the integer 4, in the obvious way. The factor that goes with it corresponds to the fact that there are three summands in that partition. The coefficient 6 that goes with those factors corresponds to the fact that there are exactly six partitions of a set of four members that break it into one part of size 2 and two parts of size 1. Similarly, the factor in the third line corresponds to the partition 2 + 2 of the integer 4, (4, because we are finding the fourth derivative), while corresponds to the fact that there are two summands (2 + 2) in that partition. The coefficient 3 corresponds to
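The displayed formulas above did not survive extraction; the following LaTeX restates the standard form of Faà di Bruno's formula consistent with the surrounding description (the symbols f, g and m_j are the conventional ones and are assumed here):

\[
\frac{d^n}{dx^n} f\bigl(g(x)\bigr)
  = \sum \frac{n!}{m_1!\,1!^{m_1}\, m_2!\,2!^{m_2} \cdots m_n!\,n!^{m_n}}
    \, f^{(m_1+\cdots+m_n)}\bigl(g(x)\bigr)
    \prod_{j=1}^{n} \bigl(g^{(j)}(x)\bigr)^{m_j},
\qquad
1\cdot m_1 + 2\cdot m_2 + \cdots + n\cdot m_n = n,
\]

and the combinatorial form reads

\[
\frac{d^n}{dx^n} f\bigl(g(x)\bigr)
  = \sum_{\pi \in \Pi} f^{(\lvert\pi\rvert)}\bigl(g(x)\bigr)
    \prod_{B \in \pi} g^{(\lvert B \rvert)}(x),
\]

where \Pi is the set of all partitions of \{1, \ldots, n\}.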
https://en.wikipedia.org/wiki/Radioallergosorbent%20test
A radioallergosorbent test (RAST) is a blood test that uses radioimmunoassay to detect specific IgE antibodies in order to determine the substances a subject is allergic to. This is different from a skin allergy test, which determines allergy by the reaction of a person's skin to different substances. Medical uses The two most commonly used methods of confirming allergen sensitization are skin testing and allergy blood testing. Both methods are recommended by the NIH guidelines and have similar diagnostic value in terms of sensitivity and specificity. Advantages of the allergy blood test include excellent reproducibility across the full measuring range of the calibration curve, very high specificity (it binds to allergen-specific IgE), and high sensitivity when compared with skin prick testing. In general, blood testing (in vitro, out of body) has a major advantage over skin-prick testing (in vivo, in body): it is not always necessary to remove the patient from an antihistamine medication regimen, and it can be used when skin conditions (such as eczema) are so widespread that allergy skin testing cannot be done. Allergy blood tests, such as ImmunoCAP, are performed without procedure variations, and the results are of excellent standardization. Adults and children of any age can take an allergy blood test. For babies and very young children, a single needle stick for allergy blood testing is often more gentle than several skin tests. However, skin testing techniques have improved. Most skin testing does not involve needles and typically skin testing results in minimal patient discomfort. Drawbacks to RAST and ImmunoCAP techniques do exist. Compared to skin testing, ImmunoCAP and other RAST techniques take longer to perform and are less cost effective. Several studies have also found these tests to be less sensitive than skin testing for the detection of clinically relevant allergies. False positive results may be obtained due to cross-reacti
https://en.wikipedia.org/wiki/Classical%20electromagnetism
Classical electromagnetism or classical electrodynamics is a branch of theoretical physics that studies the interactions between electric charges and currents using an extension of the classical Newtonian model; It is, therefore, a classical field theory. The theory provides a description of electromagnetic phenomena whenever the relevant length scales and field strengths are large enough that quantum mechanical effects are negligible. For small distances and low field strengths, such interactions are better described by quantum electrodynamics, which is a quantum field theory. Fundamental physical aspects of classical electrodynamics are presented in many texts, such as those by Richard Feynman, Robert B. Leighton and Matthew Sands, David J. Griffiths, Wolfgang K. H. Panofsky and Melba Phillips, and John David Jackson. History The physical phenomena that electromagnetism describes have been studied as separate fields since antiquity. For example, there were many advances in the field of optics centuries before light was understood to be an electromagnetic wave. However, the theory of electromagnetism, as it is currently understood, grew out of Michael Faraday's experiments suggesting the existence of an electromagnetic field and James Clerk Maxwell's use of differential equations to describe it in his A Treatise on Electricity and Magnetism (1873). The development of electromagnetism in Europe included the development of methods to measure voltage, current, capacitance, and resistance. Detailed historical accounts are given by Wolfgang Pauli, E. T. Whittaker, Abraham Pais, and Bruce J. Hunt. Lorentz force The electromagnetic field exerts the following force (often called the Lorentz force) on charged particles: where all boldfaced quantities are vectors: is the force that a particle with charge q experiences, is the electric field at the location of the particle, is the velocity of the particle, is the magnetic field at the location of the particle. Th
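The force law referred to above was lost in extraction; in the standard notation (assumed here, matching the symbol list that follows it) it reads

\[
\mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right).
\]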
https://en.wikipedia.org/wiki/Super%20VGA
Super VGA (SVGA) is a broad term that covers a wide range of computer display standards that extended IBM's VGA specification. When used as shorthand for a resolution, as VGA and XGA often are, SVGA refers to a resolution of 800 × 600. History In the late 1980s, after the release of IBM's VGA, third-party manufacturers began making graphics cards based on its specifications with extended capabilities. As these cards grew in popularity they began to be referred to as "Super VGA." This term was not an official standard, but a shorthand for enhanced VGA cards which had become common by 1988. The first cards that explicitly used the term were Genoa Systems's SuperVGA and SuperVGA HiRes in 1987. Super VGA cards broke compatibility with the IBM VGA standard, requiring software developers to provide specific display drivers and implementations for each card their software could operate on. Initially, the heavy restrictions this placed on software developers slowed the uptake of Super VGA cards, which motivated VESA to produce a unifying standard, the VESA BIOS Extensions (VBE), first introduced in 1989, to provide a common software interface to all cards implementing the VBE specification. Eventually, Super VGA graphics adapters supported innumerable modes. Specifications The Super VGA standardized the following resolutions: 640 × 400 or 640 × 480 with 256 colors 800 × 600 with 24-bit color depth 1024 × 768 with 24-bit color depth 1280 × 1024 with 24-bit color depth SVGA uses the same DE-15 VGA connector as the original standard, and otherwise operates over the same cabling and interfaces as VGA. Early manufacturers Some early Super VGA manufacturers and some of their models, where available: Ahead Technologies (Not related to Nero AG, formerly Ahead Software) Amdek: VGA ADAPTER/132 (Tseng Labs chipset) AST Research, Inc.: VGA Plus (rebranded Paradise) ATI Technologies: VIP (82C451), VGA Wonder Chips and Technologies: 82C451 Cirrus Logic: CL-GD410/4
https://en.wikipedia.org/wiki/Adaptive%20chosen-ciphertext%20attack
An adaptive chosen-ciphertext attack (abbreviated as CCA2) is an interactive form of chosen-ciphertext attack in which an attacker first sends a number of adaptively chosen ciphertexts to be decrypted, and then uses the results to distinguish a target ciphertext without consulting the oracle on the challenge ciphertext. In an adaptive attack, the attacker is further allowed to make adaptive queries after the target is revealed (but the target query itself is disallowed). It extends the indifferent (non-adaptive) chosen-ciphertext attack (CCA1), in which the second stage of adaptive queries is not allowed. Charles Rackoff and Dan Simon defined CCA2 and suggested a system building on the non-adaptive CCA1 definition and system of Moni Naor and Moti Yung (which was the first treatment of chosen ciphertext attack immunity of public key systems). In certain practical settings, the goal of this attack is to gradually reveal information about an encrypted message, or about the decryption key itself. For public-key systems, adaptive chosen-ciphertext attacks are generally applicable only when the ciphertexts are malleable; that is, when a ciphertext can be modified in specific ways that will have a predictable effect on the decryption of that message. Practical attacks Adaptive chosen-ciphertext attacks were largely considered a theoretical concern that had not manifested in practice until 1998, when Daniel Bleichenbacher (then of Bell Laboratories) demonstrated a practical attack against systems using RSA encryption in concert with the PKCS#1 v1 encoding function, including a version of the Secure Sockets Layer (SSL) protocol used by thousands of web servers at the time. The Bleichenbacher attacks, also known as the million message attack, took advantage of flaws within the PKCS #1 function to gradually reveal the content of an RSA encrypted message. Doing this requires sending several million test ciphertexts to the decryption device (e.g. SSL
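A minimal Python sketch of the malleability property that such attacks exploit, assuming textbook (unpadded) RSA with toy parameters chosen purely for illustration: multiplying a ciphertext by s^e mod n multiplies the hidden plaintext by s, so an attacker can craft related ciphertexts without knowing the key.

# Toy RSA parameters; far too small for real use, illustration only.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

m = 42                              # secret message
c = pow(m, e, n)                    # ciphertext as observed by the attacker

s = 7                               # attacker-chosen multiplier
c_mod = (c * pow(s, e, n)) % n      # modified ciphertext, built without the key

# Decrypting the modified ciphertext yields a predictably related plaintext.
assert pow(c_mod, d, n) == (m * s) % n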
https://en.wikipedia.org/wiki/Laser%20turntable
A laser turntable (or optical turntable) is a phonograph that plays standard LP records (and other gramophone records) using laser beams as the pickup instead of using a stylus as in conventional turntables. Although these turntables use laser pickups, the same as Compact Disc players, the signal remains in the analog realm and is never digitized. History William K. Heine presented a paper "A Laser Scanning Phonograph Record Player" to the 57th Audio Engineering Society (AES) convention in May 1977. The paper details a method developed by Heine that employs a single 2.2 mW helium–neon laser for both tracking a record groove and reproducing the stereo audio of a phonograph in real time. In development since 1972, the working prototype was named the "LASERPHONE", and the methods it used for playback was awarded U.S. Patent 3,992,593 on 16 November 1976. Heine concluded in his paper that he hoped his work would increase interest in using lasers for phonographic playback. Finial Four years later in 1981 Robert S. Reis, a graduate student in engineering at Stanford University, wrote his master's thesis on "An Optical Turntable". In 1983 he and fellow Stanford electrical engineer Robert E. Stoddard founded Finial Technology to develop and market a laser turntable, raising $7 million in venture capital. In 1984 servo-control expert Robert N. Stark joined the effort. A non-functioning mock-up of the proposed Finial turntable was shown at the 1984 Consumer Electronics Show (CES), generating much interest and a fair amount of mystery, since the patents had not yet been granted and the details had to be kept secret. The first working model, the Finial LT-1 (Laser Turntable-1), was completed in time for the 1986 CES. The prototype revealed an interesting flaw of laser turntables: they are so accurate that they "play" every particle of dirt and dust on the record, instead of pushing them aside as a conventional stylus would. The non-contact laser pickup does have the advanta
https://en.wikipedia.org/wiki/Radar%20speed%20gun
A radar speed gun, also known as a radar gun, speed gun, or speed trap gun, is a device used to measure the speed of moving objects. It is commonly used by police to check the speed of moving vehicles while conducting traffic enforcement, and in professional sports to measure speeds such as those of baseball pitches, tennis serves, and cricket bowls. A radar speed gun is a Doppler radar unit that may be handheld, vehicle-mounted, or static. It measures the speed of the objects at which it is pointed by detecting a change in frequency of the returned radar signal caused by the Doppler effect, whereby the frequency of the returned signal is increased in proportion to the object's speed of approach if the object is approaching, and lowered if the object is receding. Such devices are frequently used for speed limit enforcement, although more modern LIDAR speed gun instruments, which use pulsed laser light instead of radar, began to replace radar guns during the first decade of the twenty-first century, because of limitations associated with small radar systems. History The radar speed gun was invented by John L. Barker Sr., and Ben Midlock, who developed radar for the military while working for the Automatic Signal Company (later Automatic Signal Division of LFE Corporation) in Norwalk, Connecticut during World War II. Originally, Automatic Signal was approached by Grumman to solve the specific problem of terrestrial landing gear damage on the Consolidated PBY Catalina amphibious aircraft. Barker and Midlock cobbled a Doppler radar unit from coffee cans soldered shut to make microwave resonators. The unit was installed at the end of the runway at Grumman's Bethpage, New York facility, and aimed directly upward to measure the sink rate of landing PBYs. After the war, Barker and Midlock tested radar on the Merritt Parkway. In 1947, the system was tested by the Connecticut State Police in Glastonbury, Connecticut, initially for traffic surveys and issuing warnings to dri
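The proportionality described above can be made concrete with a small worked example, assuming the usual approximation for a target moving directly toward or away from the radar: the round-trip reflection gives a shift of about Δf ≈ 2·v·f0/c, so speed is recovered as v ≈ c·Δf/(2·f0). The transmit frequency and shift below are illustrative values, not taken from the article.

C = 299_792_458.0                    # speed of light, m/s

def speed_from_doppler_shift(f_transmit_hz: float, delta_f_hz: float) -> float:
    # The reflected signal is shifted twice (out and back), hence the factor of 2.
    return C * delta_f_hz / (2.0 * f_transmit_hz)

# Example: a 24.15 GHz transmitter observing a 2,414 Hz shift
# corresponds to roughly 15 m/s (about 54 km/h).
print(speed_from_doppler_shift(24.15e9, 2_414.0))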
https://en.wikipedia.org/wiki/Graph%20coloring
In graph theory, graph coloring is a special case of graph labeling; it is an assignment of labels traditionally called "colors" to elements of a graph subject to certain constraints. In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices are of the same color; this is called a vertex coloring. Similarly, an edge coloring assigns a color to each edge so that no two adjacent edges are of the same color, and a face coloring of a planar graph assigns a color to each face or region so that no two faces that share a boundary have the same color. Vertex coloring is often used to introduce graph coloring problems, since other coloring problems can be transformed into a vertex coloring instance. For example, an edge coloring of a graph is just a vertex coloring of its line graph, and a face coloring of a plane graph is just a vertex coloring of its dual. However, non-vertex coloring problems are often stated and studied as-is. This is partly pedagogical, and partly because some problems are best studied in their non-vertex form, as in the case of edge coloring. The convention of using colors originates from coloring the countries of a map, where each face is literally colored. This was generalized to coloring the faces of a graph embedded in the plane. By planar duality it became coloring the vertices, and in this form it generalizes to all graphs. In mathematical and computer representations, it is typical to use the first few positive or non-negative integers as the "colors". In general, one can use any finite set as the "color set". The nature of the coloring problem depends on the number of colors but not on what they are. Graph coloring enjoys many practical applications as well as theoretical challenges. Beside the classical types of problems, different limitations can also be set on the graph, or on the way a color is assigned, or even on the color itself. It has even reached popularity with the general public in t
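As a concrete illustration of vertex coloring, the short Python sketch below applies a greedy rule (each vertex takes the smallest color unused by its already-colored neighbours). The example graph and its adjacency-list representation are made up for illustration, and greedy coloring yields a valid but not necessarily minimal coloring.

def greedy_vertex_coloring(adjacency: dict) -> dict:
    """Assign each vertex the smallest color not used by an already-colored neighbor."""
    color = {}
    for v in adjacency:                      # visiting order affects how many colors are used
        taken = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# A 4-cycle: two colors suffice, and no two adjacent vertices share a color.
graph = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["a", "c"]}
coloring = greedy_vertex_coloring(graph)
assert all(coloring[u] != coloring[v] for u in graph for v in graph[u])
print(coloring)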
https://en.wikipedia.org/wiki/Actor%20%28UML%29
An actor in the Unified Modeling Language (UML) "specifies a role played by a user or any other system that interacts with the subject." "An Actor models a type of role played by an entity that interacts with the subject (e.g., by exchanging signals and data), but which is external to the subject." "Actors may represent roles played by human users, external hardware, or other subjects. Actors do not necessarily represent specific physical entities but merely particular facets (i.e., “roles”) of some entities that are relevant to the specification of its associated use cases. A single physical instance may play the role of several different actors and a given actor may be played by multiple different instances." UML 2 does not permit associations between Actors. The use of generalization/specialization relationship between actors is useful in modeling overlapping behaviours between actors and does not violate this constraint since a generalization relation is not a type of association. Actors interact with use cases. References External links Illustration of actors in UML Actor in UML 2 Unified Modeling Language
https://en.wikipedia.org/wiki/Adder%20%28electronics%29
An adder, or summer, is a digital circuit that performs addition of numbers. In many computers and other kinds of processors adders are used in the arithmetic logic units (ALUs). They are also used in other parts of the processor, where they are used to calculate addresses, table indices, increment and decrement operators and similar operations. Although adders can be constructed for many number representations, such as binary-coded decimal or excess-3, the most common adders operate on binary numbers. In cases where two's complement or ones' complement is being used to represent negative numbers, it is trivial to modify an adder into an adder–subtractor. Other signed number representations require more logic around the basic adder. History In 1937, Claude Shannon demonstrated binary addition in his graduate thesis at MIT. Binary adders Half adder The half adder adds two single binary digits and . It has two outputs, sum () and carry (). The carry signal represents an overflow into the next digit of a multi-digit addition. The value of the sum is . The simplest half-adder design, pictured on the right, incorporates an XOR gate for and an AND gate for . The Boolean logic for the sum (in this case ) will be whereas for the carry () will be . With the addition of an OR gate to combine their carry outputs, two half adders can be combined to make a full adder. The half adder adds two input bits and generates a carry and sum, which are the two outputs of a half adder. The input variables of a half adder are called the augend and addend bits. The output variables are the sum and carry. The truth table for the half adder is: {| class="wikitable" style="text-align:center" |- ! colspan="2"| Inputs || colspan="2"| Outputs |- style="background:#def; text-align:center;" | A || B || Cout || S |- style="background:#dfd; text-align:center;" | 0 || 0 || 0 || 0 |- style="background:#dfd; text-align:center;" | 0 || 1 || 0 || 1 |- style="background:#dfd; text-align:center;"
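The gate-level behavior described above can be modelled in a few lines. The sketch below uses plain Python purely to mirror the Boolean logic of the XOR, AND and OR gates; the bit values and the loop are illustrative only.

def half_adder(a: int, b: int):
    s = a ^ b          # XOR gate produces the sum bit
    carry = a & b      # AND gate produces the carry bit
    return s, carry

def full_adder(a: int, b: int, cin: int):
    # Two half adders, with an OR gate combining their carry outputs.
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))   # reproduces the half-adder truth table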
https://en.wikipedia.org/wiki/Mutual%20information
In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual dependence between the two variables. More specifically, it quantifies the "amount of information" (in units such as shannons (bits), nats or hartleys) obtained about one random variable by observing the other random variable. The concept of mutual information is intimately linked to that of entropy of a random variable, a fundamental notion in information theory that quantifies the expected "amount of information" held in a random variable. Not limited to real-valued random variables and linear dependence like the correlation coefficient, MI is more general and determines how different the joint distribution of the pair is from the product of the marginal distributions of and . MI is the expected value of the pointwise mutual information (PMI). The quantity was defined and analyzed by Claude Shannon in his landmark paper "A Mathematical Theory of Communication", although he did not call it "mutual information". This term was coined later by Robert Fano. Mutual Information is also known as information gain. Definition Let be a pair of random variables with values over the space . If their joint distribution is and the marginal distributions are and , the mutual information is defined as where is the Kullback–Leibler divergence, and is the outer product distribution which assigns probability to each . Notice, as per property of the Kullback–Leibler divergence, that is equal to zero precisely when the joint distribution coincides with the product of the marginals, i.e. when and are independent (and hence observing tells you nothing about ). is non-negative, it is a measure of the price for encoding as a pair of independent random variables when in reality they are not. If the natural logarithm is used, the unit of mutual information is the nat. If the log base 2 is used, the unit of mutual information is the shannon, also k
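The defining formula referred to above did not survive extraction; in the standard notation for discrete variables (assumed here) it reads

\[
I(X;Y) \;=\; D_{\mathrm{KL}}\!\left(P_{(X,Y)} \,\big\|\, P_X \otimes P_Y\right)
        \;=\; \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}}
          p_{(X,Y)}(x,y)\,\log \frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)} .
\]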
https://en.wikipedia.org/wiki/Service%20Location%20Protocol
The Service Location Protocol (SLP, srvloc) is a service discovery protocol that allows computers and other devices to find services in a local area network without prior configuration. SLP has been designed to scale from small, unmanaged networks to large enterprise networks. It has been defined in RFC 2608 and RFC 3224 as standards track document. Overview SLP is used by devices to announce services on a local network. Each service must have a URL that is used to locate the service. Additionally it may have an unlimited number of name/value pairs, called attributes. Each device must always be in one or more scopes. Scopes are simple strings and are used to group services, comparable to the network neighborhood in other systems. A device cannot see services that are in different scopes. The URL of a printer could look like: service:printer:lpr://myprinter/myqueue This URL describes a queue called "myqueue" on a printer with the host name "myprinter". The protocol used by the printer is LPR. Note that a special URL scheme "service:" is used by the printer. "service:" URLs are not required: any URL scheme can be used, but they allow you to search for all services of the same type (e.g. all printers) regardless of the protocol that they use. The first three components of the "service:" URL type ("service:printer:lpr") are also called service type. The first two components ("service:printer") are called abstract service type. In a non-"service:" URL the schema name is the service type (for instance "http" in "http://www.wikipedia.org"). The attributes of the printer could look like: (printer-name=Hugo), (printer-natural-language-configured=en-us), (printer-location=In my home office), (printer-document-format-supported=application/postscript), (printer-color-supported=false), (printer-compression-supported=deflate, gzip) The example uses the standard syntax for attributes in SLP, only newlines have been added to improve readability. The definition of a "
https://en.wikipedia.org/wiki/List%20of%20transforms
This is a list of transforms in mathematics. Integral transforms Abel transform Bateman transform Fourier transform Short-time Fourier transform Gabor transform Hankel transform Hartley transform Hermite transform Hilbert transform Hilbert–Schmidt integral operator Jacobi transform Laguerre transform Laplace transform Inverse Laplace transform Two-sided Laplace transform Inverse two-sided Laplace transform Laplace–Carson transform Laplace–Stieltjes transform Legendre transform Linear canonical transform Mellin transform Inverse Mellin transform Poisson–Mellin–Newton cycle N-transform Radon transform Stieltjes transformation Sumudu transform Wavelet transform (integral) Weierstrass transform Hussein Jassim Transform Discrete transforms Binomial transform Discrete Fourier transform, DFT Fast Fourier transform, a popular implementation of the DFT Discrete cosine transform Modified discrete cosine transform Discrete Hartley transform Discrete sine transform Discrete wavelet transform Hadamard transform (or, Walsh–Hadamard transform) Fast wavelet transform Hankel transform, the determinant of the Hankel matrix Discrete Chebyshev transform Equivalent, up to a diagonal scaling, to a discrete cosine transform Finite Legendre transform Spherical Harmonic transform Irrational base discrete weighted transform Number-theoretic transform Stirling transform Discrete-time transforms These transforms have a continuous frequency domain: Discrete-time Fourier transform Z-transform Data-dependent transforms Karhunen–Loève transform Other transforms Affine transformation (computer graphics) Bäcklund transform Bilinear transform Box–Muller transform Burrows–Wheeler transform (data compression) Chirplet transform Distance transform Fractal transform Gelfand transform Hadamard transform Hough transform (digital image processing) Inverse scattering transform Legendre transformation Möbius transformation Perspective transform (computer graphics) Sequence transform Watershed transform (
https://en.wikipedia.org/wiki/MEPIS
MEPIS was a set of Linux distributions, distributed as Live CDs or DVDs that could be installed onto a hard disk drive. MEPIS was started by Warren Woodford and MEPIS LLC. The most popular MEPIS distribution was SimplyMEPIS, which was based primarily on Debian stable, with the last version of SimplyMEPIS being based on Debian 6. It could either be installed onto a hard drive or used as a Live DVD, which made it externally bootable for troubleshooting and repairing many operating systems. It included the KDE desktop environment. History MEPIS was designed as an alternative to SUSE Linux, Red Hat Linux, and Mandriva Linux (formerly Mandrake) which Woodford considered too difficult for the average user. MEPIS's first official release was on May 10, 2003. In 2006, MEPIS made a transition from using Debian packages to using Ubuntu packages. SimplyMEPIS 6.0, released in July 2006, was the first version of MEPIS to incorporate the Ubuntu packages and repositories. SimplyMEPIS 7.0 discontinued the use of Ubuntu binary packages in favor of a combination of MEPIS packaged binaries based on Debian and Ubuntu source code, combined with a Debian stable OS core and extra packages from Debian package pools. Major releases occurred about six months to one year apart until 2013, based mostly on Warren's availability to produce the next version. Variants SimplyMEPIS, designed for everyday desktop and laptop computing. The default desktop environment is KDE-based, although Gnome and/or other GUI-environments can be installed. SimplyMEPIS 11.0 is based on Debian 6 and includes Linux 2.6.36.4, KDE 4.5.1 and LibreOffice 3.3.2, with other applications available from Debian and the MEPIS Community. It was released on May 5, 2011. Development halted during beta testing of Mepis 12. antiX, a fast and lightweight distribution, was originally based on MEPIS for x86 systems in an environment suitable for old computers. It's now based on Debian Stable. MX Linux, a midweight distribution
https://en.wikipedia.org/wiki/Parabolic%20coordinates
Parabolic coordinates are a two-dimensional orthogonal coordinate system in which the coordinate lines are confocal parabolas. A three-dimensional version of parabolic coordinates is obtained by rotating the two-dimensional system about the symmetry axis of the parabolas. Parabolic coordinates have found many applications, e.g., the treatment of the Stark effect and the potential theory of the edges. Two-dimensional parabolic coordinates Two-dimensional parabolic coordinates are defined by the equations, in terms of Cartesian coordinates: The curves of constant form confocal parabolae that open upwards (i.e., towards ), whereas the curves of constant form confocal parabolae that open downwards (i.e., towards ). The foci of all these parabolae are located at the origin. The Cartesian coordinates and can be converted to parabolic coordinates by: Two-dimensional scale factors The scale factors for the parabolic coordinates are equal Hence, the infinitesimal element of area is and the Laplacian equals Other differential operators such as and can be expressed in the coordinates by substituting the scale factors into the general formulae found in orthogonal coordinates. Three-dimensional parabolic coordinates The two-dimensional parabolic coordinates form the basis for two sets of three-dimensional orthogonal coordinates. The parabolic cylindrical coordinates are produced by projecting in the -direction. Rotation about the symmetry axis of the parabolae produces a set of confocal paraboloids, the coordinate system of tridimensional parabolic coordinates. Expressed in terms of cartesian coordinates: where the parabolae are now aligned with the -axis, about which the rotation was carried out. Hence, the azimuthal angle is defined The surfaces of constant form confocal paraboloids that open upwards (i.e., towards ) whereas the surfaces of constant form confocal paraboloids that open downwards (i.e., towards ). The foci of all these pa
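The defining equations above were lost in extraction; in the usual convention (the coordinate names σ and τ are an assumption about the notation) the two-dimensional relations, scale factors and area element are

\[
x = \sigma\tau, \qquad y = \tfrac{1}{2}\bigl(\tau^{2} - \sigma^{2}\bigr), \qquad
h_{\sigma} = h_{\tau} = \sqrt{\sigma^{2} + \tau^{2}}, \qquad
dA = \bigl(\sigma^{2} + \tau^{2}\bigr)\,d\sigma\,d\tau .
\]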
https://en.wikipedia.org/wiki/Blind%20signature
In cryptography a blind signature, as introduced by David Chaum, is a form of digital signature in which the content of a message is disguised (blinded) before it is signed. The resulting blind signature can be publicly verified against the original, unblinded message in the manner of a regular digital signature. Blind signatures are typically employed in privacy-related protocols where the signer and message author are different parties. Examples include cryptographic election systems and digital cash schemes. An often-used analogy to the cryptographic blind signature is the physical act of a voter enclosing a completed anonymous ballot in a special carbon paper lined envelope that has the voter's credentials pre-printed on the outside. An official verifies the credentials and signs the envelope, thereby transferring his signature to the ballot inside via the carbon paper. Once signed, the package is given back to the voter, who transfers the now signed ballot to a new unmarked normal envelope. Thus, the signer does not view the message content, but a third party can later verify the signature and know that the signature is valid within the limitations of the underlying signature scheme. Blind signatures can also be used to provide unlinkability, which prevents the signer from linking the blinded message it signs to a later un-blinded version that it may be called upon to verify. In this case, the signer's response is first "un-blinded" prior to verification in such a way that the signature remains valid for the un-blinded message. This can be useful in schemes where anonymity is required. Blind signature schemes can be implemented using a number of common public key signing schemes, for instance RSA and DSA. To perform such a signature, the message is first "blinded", typically by combining it in some way with a random "blinding factor". The blinded message is passed to a signer, who then signs it using a standard signing algorithm. The resulting message, along
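A minimal Python sketch of RSA blinding as described above, with toy parameters chosen purely for illustration (a real scheme hashes and pads the message; the numbers here are not secure):

import math

# Toy RSA key, illustration only.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))       # signer's private exponent

m = 99                                   # message to be signed, never shown to the signer
r = 37                                   # blinding factor, must be coprime to n
assert math.gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n         # author blinds the message
blind_sig = pow(blinded, d, n)           # signer signs the blinded value
sig = (blind_sig * pow(r, -1, n)) % n    # author removes the blinding factor

assert sig == pow(m, d, n)               # identical to an ordinary signature on m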
https://en.wikipedia.org/wiki/Inline%20function
In the C and C++ programming languages, an inline function is one qualified with the keyword inline; this serves two purposes: It serves as a compiler directive that suggests (but does not require) that the compiler substitute the body of the function inline by performing inline expansion, i.e. by inserting the function code at the address of each function call, thereby saving the overhead of a function call. In this respect it is analogous to the register storage class specifier, which similarly provides an optimization hint. The second purpose of inline is to change linkage behavior; the details of this are complicated. This is necessary due to the C/C++ separate compilation + linkage model, specifically because the definition (body) of the function must be duplicated in all translation units where it is used, to allow inlining during compiling, which, if the function has external linkage, causes a collision during linking (it violates uniqueness of external symbols). C and C++ (and dialects such as GNU C and Visual C++) resolve this in different ways. Example An inline function can be written in C or C++ like this: inline void swap(int *m, int *n) { int tmp = *m; *m = *n; *n = tmp; } Then, a statement such as the following: swap(&x, &y); may be translated into (if the compiler decides to do the inlining, which typically requires optimization to be enabled): int tmp = x; x = y; y = tmp; When implementing a sorting algorithm doing lots of swaps, this can increase the execution speed. Standard support C++ and C99, but not its predecessors K&R C and C89, have support for inline functions, though with different semantics. In both cases, inline does not force inlining; the compiler is free to choose not to inline the function at all, or only in some cases. Different compilers vary in how complex a function they can manage to inline. Mainstream C++ compilers like Microsoft Visual C++ and GCC support an option that lets the compilers automatically inlin
https://en.wikipedia.org/wiki/Memory%20debugger
A memory debugger is a debugger for finding software memory problems such as memory leaks and buffer overflows. These are due to bugs related to the allocation and deallocation of dynamic memory. Programs written in languages that have garbage collection, such as managed code, might also need memory debuggers, e.g. for memory leaks due to "living" references in collections. Overview Memory debuggers work by monitoring memory access, allocations, and deallocation of memory. Many memory debuggers require applications to be recompiled with special dynamic memory allocation libraries, whose APIs are mostly compatible with conventional dynamic memory allocation libraries, or else use dynamic linking. Electric Fence is such a debugger which debugs memory allocation with malloc. Some memory debuggers (e.g. Valgrind) work by running the executable in a virtual machine-like environment, monitoring memory access, allocation and deallocation so that no recompilation with special memory allocation libraries is required. Finding memory issues such as leaks can be extremely time consuming as they may not manifest themselves except under certain conditions. Using a tool to detect memory misuse makes the process much faster and easier. As abnormally high memory utilization can be a contributing factor in software aging, memory debuggers can help programmers to avoid software anomalies that would exhaust the computer system memory, thus ensuring high reliability of the software even for long runtimes. Comparison to static analyzer Some static analysis tools can also help find memory errors. Memory debuggers operate as part of an application while it's running while static code analysis is performed by analyzing the code without executing it. These different techniques will typically find different instances of problems, and using them both together yields the best result. List of memory debugging tools This is a list of tools useful for memory debugging. A profiler can be used
https://en.wikipedia.org/wiki/Valgrind
Valgrind () is a programming tool for memory debugging, memory leak detection, and profiling. Valgrind was originally designed to be a freely licensed memory debugging tool for Linux on x86, but has since evolved to become a generic framework for creating dynamic analysis tools such as checkers and profilers. Overview Valgrind is in essence a virtual machine using just-in-time compilation techniques, including dynamic recompilation. Nothing from the original program ever gets run directly on the host processor. Instead, Valgrind first translates the program into a temporary, simpler form called intermediate representation (IR), which is a processor-neutral, static single assignment form-based form. After the conversion, a tool (see below) is free to do whatever transformations it would like on the IR, before Valgrind translates the IR back into machine code and lets the host processor run it. Valgrind recompiles binary code to run on host and target (or simulated) CPUs of the same architecture. It also includes a GDB stub to allow debugging of the target program as it runs in Valgrind, with "monitor commands" that allow querying the Valgrind tool for various information. A considerable amount of performance is lost in these transformations (and usually, the code the tool inserts); usually, code run with Valgrind and the "none" tool (which does nothing to the IR) runs at 20% to 25% of the speed of the normal program. Tools Memcheck There are multiple tools included with Valgrind (and several external ones). The default (and most used) tool is Memcheck. Memcheck inserts extra instrumentation code around almost all instructions, which keeps track of the validity (all unallocated memory starts as invalid or "undefined", until it is initialized into a deterministic state, possibly from other memory) and addressability (whether the memory address in question points to an allocated, non-freed memory block), stored in the so-called V bits and A bits respectively. As da
https://en.wikipedia.org/wiki/Splint%20%28programming%20tool%29
Splint, short for Secure Programming Lint, is a programming tool for statically checking C programs for security vulnerabilities and coding mistakes. Formerly called LCLint, it is a modern version of the Unix lint tool. Splint has the ability to interpret special annotations to the source code, which gives it stronger checking than is possible just by looking at the source alone. Splint is used by gpsd as part of an effort to design for zero defects. Splint is free software released under the terms of the GNU General Public License. Main development activity on Splint stopped in 2010. According to the CVS at SourceForge, as of September 2012 the most recent change in the repository was in November 2010. A Git repository at GitHub has more recent changes, starting in July 2019. Example #include <stdio.h> int main() { char c; while (c != 'x'); { c = getchar(); if (c = 'x') return 0; switch (c) { case '\n': case '\r': printf("Newline\n"); default: printf("%c",c); } } return 0; } Splint's output: <nowiki> Variable c used before definition Suspected infinite loop. No value used in loop test (c) is modified by test or loop body. Assignment of int to char: c = getchar() Test expression for if is assignment expression: c = 'x' Test expression for if not boolean, type char: c = 'x' Fall through case (no preceding break) </nowiki> Fixed source: #include <stdio.h> int main() { int c = 0; // Added an initial assignment definition. while (c != 'x') { c = getchar(); // Corrected type of c to int if (c == 'x') // Fixed the assignment error to make it a comparison operator. return 0; switch (c) { case '\n': case '\r': printf("Newline\n"); break; // Added break statement to prevent fall-through. default: printf("%c",c); break; //Added break stateme
https://en.wikipedia.org/wiki/Catallactics
Catallactics is a theory of the way the free market system reaches exchange ratios and prices. It aims to analyse all actions based on monetary calculation and trace the formation of prices back to the point where an agent makes his or her choices. It explains prices as they are, rather than as they "should" be. The laws of catallactics are not value judgments, but aim to be exact, empirical, and of universal validity. It was used extensively by the Austrian School economist Ludwig von Mises. Etymology The term catallactics or catallaxy, respectively, comes from the Greek verb which means to exchange, to reconcile. Definition Catallactics is a praxeological theory. The term catallaxy was used by Friedrich Hayek to describe "the order brought about by the mutual adjustment of many individual economies in a market." Hayek was dissatisfied with the usage of the word "economy" because its Greek root, which translates as "household management", implies that economic agents in a market economy possess shared goals. He derived the word "Catallaxy" (Hayek's suggested Greek construction would be rendered καταλλαξία) from the Greek verb katallasso (καταλλάσσω) which meant not only "to exchange" but also "to admit in the community" and "to change from enemy into friend." According to Mises and Hayek it was Richard Whately who coined the term "catallactics". Whately's Introductory Lectures on Political Economy (1831) reads: See also Price signal Catallaxy Notes Bibliography External links Austrian School Friedrich Hayek Self-organization
https://en.wikipedia.org/wiki/Set-theoretic%20definition%20of%20natural%20numbers
In set theory, several ways have been proposed to construct the natural numbers. These include the representation via von Neumann ordinals, commonly employed in axiomatic set theory, and a system based on equinumerosity that was proposed by Gottlob Frege and by Bertrand Russell. Definition as von Neumann ordinals In Zermelo–Fraenkel (ZF) set theory, the natural numbers are defined recursively by letting be the empty set and for each n. In this way for each natural number n. This definition has the property that n is a set with n elements. The first few numbers defined this way are: The set N of natural numbers is defined in this system as the smallest set containing 0 and closed under the successor function S defined by . The structure is a model of the Peano axioms . The existence of the set N is equivalent to the axiom of infinity in ZF set theory. The set N and its elements, when constructed this way, are an initial part of the von Neumann ordinals. Ravven and Quine refer to these sets as "counter sets". Frege and Russell Gottlob Frege and Bertrand Russell each proposed defining a natural number n as the collection of all sets with n elements. More formally, a natural number is an equivalence class of finite sets under the equivalence relation of equinumerosity. This definition may appear circular, but it is not, because equinumerosity can be defined in alternate ways, for instance by saying that two sets are equinumerous if they can be put into one-to-one correspondence—this is sometimes known as Hume's principle. This definition works in type theory, and in set theories that grew out of type theory, such as New Foundations and related systems. However, it does not work in the axiomatic set theory ZFC nor in certain related systems, because in such systems the equivalence classes under equinumerosity are proper classes rather than sets. For enabling natural numbers to form a set, equinumerous classes are replaced by special sets, named cardinal
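The displayed definitions above were stripped in extraction; in the standard notation (assumed here) the recursion and the first few von Neumann numerals are

\[
0 = \varnothing, \qquad S(n) = n \cup \{n\}, \qquad
1 = \{0\} = \{\varnothing\}, \quad
2 = \{0,1\} = \{\varnothing, \{\varnothing\}\}, \quad
3 = \{0,1,2\} = \{\varnothing, \{\varnothing\}, \{\varnothing, \{\varnothing\}\}\}.
\]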
https://en.wikipedia.org/wiki/Moor%27s%20head
A Moor's head, since the 11th century, is a symbol depicting the head of a black moor. Origin The precise origin of the Moor's head is a subject of controversy. The most likely explanation is that it is derived from the heraldic war flag of the Reconquista depicting the Cross of Alcoraz, symbolizing Peter I of Aragon and Pamplona's victory over the "Moorish" kings of the Taifa of Zaragoza in the Battle of Alcoraz in 1096. The blindfold may originally have been a headband. Another theory claims that it is the Nubian Saint Maurice (3rd century AD). The earliest heraldic use of the Moor's head is first recorded in 1281, during the reign of Peter III of Aragon and represents the Cross of Alcoraz, which the King adopted as his personal coat of arms. The Crown of Aragon had for a long time governed Sardinia and Corsica, having been granted the islands by the Pope, although they never really exercised formal control. The Moor's head became a symbol of the islands. Flags, seals, and emblems This symbol is used in heraldry, vexillography, and political imagery. Flag of Corsica The main charge in the coat of arms in Corsica is a , Corsican for "The Moor". An early version is attested in the 14th-century Gelre Armorial, where an unblindfolded Moor's head represents Corsica as a territory of the Crown of Aragon. Interestingly, the Moor's head is attached to his shoulders and upper body, and he is alive and smiling. In 1736, it was used by both sides during the struggle for independence. In 1760, General Pasquale Paoli ordered the necklace to be removed from the head and the blindfold raised. His reason, reported by his biographers, was "" () The blindfold was thereafter changed to a headband. The current flag of Corsica is the , is male rather than female, and has a regular knot at the back of the head. SC Bastia The Moor's head appears on the logo for the Corsican football team SC Bastia, who play in the French football system's Ligue 2. Flag of Sardinia The fla
https://en.wikipedia.org/wiki/Frontend%20and%20backend
In software engineering, the terms frontend and backend (sometimes written as back end or back-end) refer to the separation of concerns between the presentation layer (frontend), and the data access layer (backend) of a piece of software, or the physical infrastructure or hardware. In the client–server model, the client is usually considered the frontend and the server is usually considered the backend, even when some presentation work is actually done on the server itself. Introduction In software architecture, there may be many layers between the hardware and end user. The front is an abstraction, simplifying the underlying component by providing a user-friendly interface, while the back usually handles data storage and business logic. In telecommunication, the front can be considered a device or service, while the back is the infrastructure that supports provision of service. A rule of thumb is that the client-side (or "frontend") is any component manipulated by the user. The server-side (or "backend") code usually resides on the server, often far removed physically from the user. Software definitions In content management systems, the terms frontend and backend may refer to the end-user facing views of the CMS and the administrative views, respectively. In speech synthesis, the frontend refers to the part of the synthesis system that converts the input text into a symbolic phonetic representation, and the backend converts the symbolic phonetic representation into actual sounds. In compilers, the frontend translates a computer programming source code into an intermediate representation, and the backend works with the intermediate representation to produce code in a computer output language. The backend usually optimizes to produce code that runs faster. The frontend/backend distinction can separate the parser section that deals with source code and the backend that generates code and optimizes. Some designs, such as GCC, offer choices between multiple fr
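A minimal sketch of the client–server split described above (our own illustration, not from the article): a toy backend serves data over HTTP, and the "frontend" client only fetches and presents it. The payload, handler name, and use of the Python standard library are assumptions made for the example.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Backend(BaseHTTPRequestHandler):
    """Backend: data access and business logic live here."""
    def do_GET(self):
        body = json.dumps({"greeting": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # keep the example's output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Backend)   # port 0: pick any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Frontend: presentation only; it consumes whatever the backend returns.
with urlopen(f"http://127.0.0.1:{port}/") as resp:
    data = json.load(resp)
print("UI shows:", data["greeting"])
server.shutdown()
```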
https://en.wikipedia.org/wiki/UNIVAC%20Solid%20State
The UNIVAC Solid State was a magnetic drum-based solid-state computer announced by Sperry Rand in December 1958 as a response to the IBM 650. It was one of the first computers offered for sale to be (nearly) entirely solid-state, using 700 transistors and 3,000 magnetic amplifiers (FERRACTOR) for primary logic, and 20 vacuum tubes largely for power control. It came in two versions, the Solid State 80 (IBM-style 80-column cards) and the Solid State 90 (Remington-Rand 90-column cards). In addition to the "80/90" designation, there were two variants of the Solid State: the SS I 80/90 and the SS II 80/90. The SS II series included two enhancements: the addition of 1,280 words of core memory and support for magnetic tape drives. The SS I had only the standard 5,000-word drum memory described in this article and no tape drives. The memory drum had a regular-access-speed area and a fast-access area. 4,000 words of memory were served by a single set of read/write (R/W) heads. The programmer was required to keep track of which words of memory were under the R/W heads and available to be read or written; at worst, the program would have to wait for a full revolution of the drum to access the required memory locations. The remaining 1,000 words of memory, however, had 4 sets of R/W heads, requiring only a 90-degree turn of the drum to access the required words. Programming required that any operation that changed the contents of a memory location first transfer the contents of the affected word from the drum to a static register. There were three of these registers: A, X, and L. To add the values contained in two drum memory locations, the programmer would transfer the contents of the first drum location to register A, then copy the second operand to the X register. The ADD instruction would then be executed, leaving the result in the X register. The contents of the X register would then be written back to the appropriate word on the drum. Both variants included a card reader, a card punch, and the line print
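The add sequence described above can be sketched as follows. This is a loose, hypothetical simulation rather than actual UNIVAC code: the addresses and values are invented, and the instruction semantics are greatly simplified.

```python
drum = {100: 7, 200: 35, 300: 0}      # word address -> contents (invented values)
registers = {"A": 0, "X": 0, "L": 0}  # the three static registers

def load(reg, addr):
    """Transfer the contents of a drum word into a static register."""
    registers[reg] = drum[addr]

def add():
    """ADD: sum of A and X, result left in the X register."""
    registers["X"] = registers["A"] + registers["X"]

def store(reg, addr):
    """Write a register back to the appropriate word on the drum."""
    drum[addr] = registers[reg]

load("A", 100)    # first operand -> A
load("X", 200)    # second operand -> X
add()             # result now in X
store("X", 300)   # write the result back to the drum
print(drum[300])  # 42
```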
https://en.wikipedia.org/wiki/Edge%20of%20chaos
The edge of chaos is a transition space between order and disorder that is hypothesized to exist within a wide variety of systems. This transition zone is a region of bounded instability that engenders a constant dynamic interplay between order and disorder. Even though the idea of the edge of chaos is an abstract one, it has many applications in such fields as ecology, business management, psychology, political science, and other domains of the social sciences. Physicists have shown that adaptation to the edge of chaos occurs in almost all systems with feedback. History The phrase edge of chaos was coined in the late 1980s by chaos theory physicist Norman Packard. In the next decade, Packard and mathematician Doyne Farmer co-authored many papers on understanding how self-organization and order emerge at the edge of chaos. One of the original catalysts that led to the idea of the edge of chaos was the series of experiments with cellular automata done by computer scientist Christopher Langton, in which a transition phenomenon was discovered. The phrase refers to an area in the range of a variable, λ (lambda), which was varied while examining the behaviour of a cellular automaton (CA). As λ varied, the behaviour of the CA went through a phase transition of behaviours. Langton found a small region of λ conducive to producing CAs capable of universal computation. At around the same time, physicist James P. Crutchfield and others used the phrase onset of chaos to describe more or less the same concept. In the sciences in general, the phrase has come to serve as a metaphor for the idea that some physical, biological, economic and social systems operate in a region between order and either complete randomness or chaos, where complexity is maximal. The generality and significance of the idea, however, have since been called into question by Melanie Mitchell and others. The phrase has also been borrowed by the business community and is sometimes used inappropriately and in contexts that are far from
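As a rough illustration of the λ parameter mentioned above (our own sketch, not from the article): for a cellular automaton rule table, λ is commonly taken as the fraction of transitions that do not map to a designated quiescent state. The state count, neighbourhood size, and random rule below are assumptions made for the example.

```python
import random

K, N = 2, 3            # K states per cell, neighbourhood of N cells
QUIESCENT = 0          # the designated quiescent state

random.seed(1)
# A rule table maps each of the K**N possible neighbourhood configurations to a next state.
rule = {cfg: random.randrange(K) for cfg in range(K ** N)}

to_quiescent = sum(1 for s in rule.values() if s == QUIESCENT)
lam = (K ** N - to_quiescent) / K ** N
print(f"lambda = {lam:.3f}")   # 0.0 = frozen order; values near (K - 1) / K = most disordered
```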
https://en.wikipedia.org/wiki/GLORIAD
GLORIAD (Global Ring Network for Advanced Application Development) is a high-speed computer network used to connect scientific organizations in Russia, China, the United States, the Netherlands, Korea and Canada. India, Singapore, Vietnam, and Egypt were added in 2009. GLORIAD is sponsored by the US National Science Foundation, a consortium of science organizations and ministries in Russia, the Chinese Academy of Sciences, the Ministry of Science and Technology of Korea, the Canadian CANARIE network, and SURFnet, the national research network in the Netherlands; some telecommunications services are donated by Tyco Telecommunications. GLORIAD provides bandwidth of up to 10 Gbit/s via OC-192 links, e.g. between KRLight in Korea and the Pacific NorthWest GigaPOP in the United States. The previous version of the network, "Little GLORIAD", was completed in mid-2004, and it connected Chicago, Hong Kong, Beijing, Novosibirsk, Moscow, Amsterdam and Chicago again. With this network, a direct computer link between Russia and China was established for the first time. References National Science Foundation (2003): United States, Russia, China link up first Global-Ring Network for advanced science and education cooperation. Retrieved January 13, 2004 from https://www.sciencedaily.com/releases/2004/01/040102092834.htm Paul, J. (2003): New network to link U.S., Russia, China. Retrieved January 13, 2004 from http://apnews.excite.com/article/20031223/D7VK5LOG2.html External links GLORIAD Academic computer network organizations
https://en.wikipedia.org/wiki/Probability%20amplitude
In quantum mechanics, a probability amplitude is a complex number used for describing the behaviour of systems. The modulus squared of this quantity represents a probability density. Probability amplitudes provide a relationship between the quantum state vector of a system and the results of observations of that system, a link first proposed by Max Born in 1926. Interpretation of the values of a wave function as probability amplitudes is a pillar of the Copenhagen interpretation of quantum mechanics. In fact, the properties of the space of wave functions were being used to make physical predictions (such as emissions from atoms being at certain discrete energies) before any physical interpretation of a particular function was offered. Born was awarded half of the 1954 Nobel Prize in Physics for this understanding, and the probability thus calculated is sometimes called the "Born probability". These probabilistic concepts, namely the probability density and quantum measurements, were vigorously contested at the time by the original physicists working on the theory, such as Schrödinger and Einstein. They are the source of the mysterious consequences and philosophical difficulties in the interpretations of quantum mechanics—topics that continue to be debated even today. Overview Physical Neglecting some technical complexities, the problem of quantum measurement concerns the behaviour of a quantum state, for which the value of the observable to be measured is uncertain. Such a state is thought to be a coherent superposition of the observable's eigenstates, states on which the value of the observable is uniquely defined, for different possible values of the observable. When a measurement of the observable is made, the system (under the Copenhagen interpretation) jumps to one of the eigenstates, returning the eigenvalue belonging to that eigenstate. The system may always be described by a linear combination or superposition of these eigenstates with unequal "weights". Intuitively it is
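A minimal numerical sketch of the modulus-squared rule (our own illustration; the amplitudes below are arbitrary and not from the article):

```python
import numpy as np

state = np.array([1 + 1j, 2 + 0j])         # amplitudes of a two-state system (arbitrary)
state = state / np.linalg.norm(state)      # normalize so the probabilities sum to 1

probabilities = np.abs(state) ** 2         # Born rule: P_i = |amplitude_i|**2
print(probabilities, probabilities.sum())  # [0.333... 0.666...] 1.0
```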
https://en.wikipedia.org/wiki/Buffer%20amplifier
A buffer amplifier (sometimes simply called a buffer) is one that provides electrical impedance transformation from one circuit to another, with the aim of preventing the signal source from being affected by whatever currents (or voltages, for a current buffer) the load may impose. The signal is 'buffered from' load currents. Two main types of buffer exist: the voltage buffer and the current buffer. Voltage buffer A voltage buffer amplifier is used to transfer a voltage from a first circuit, having a high output impedance level, to a second circuit with a low input impedance level. The interposed buffer amplifier prevents the second circuit from loading the first circuit unacceptably and interfering with its desired operation, since without the voltage buffer the voltage of the second circuit is influenced by the output impedance of the first circuit (as it is larger than the input impedance of the second circuit). In the ideal voltage buffer in the diagram, the input resistance is infinite and the output resistance zero (the output impedance of an ideal voltage source is zero). Other properties of the ideal buffer are: perfect linearity, regardless of signal amplitudes; and instant output response, regardless of the speed of the input signal. If the voltage is transferred unchanged (the voltage gain Av is 1), the amplifier is a unity gain buffer, also known as a voltage follower because the output voltage follows or tracks the input voltage. Although the voltage gain of a voltage buffer amplifier may be (approximately) unity, it usually provides considerable current gain and thus power gain. However, it is commonplace to say that it has a gain of 1 (or the equivalent 0 dB), referring to the voltage gain. As an example, consider a Thévenin source (voltage VA, series resistance RA) driving a resistor load RL. Because of voltage division (also referred to as "loading") the voltage across the load is only VA RL / (RL + RA). However, if the Thévenin source drives a un
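The loading formula above can be checked numerically. The component values below are assumed purely for illustration and do not come from the article.

```python
V_A = 1.0        # Thevenin source voltage, volts
R_A = 10_000.0   # source series (output) resistance, ohms
R_L = 1_000.0    # load resistance, ohms

v_loaded = V_A * R_L / (R_L + R_A)   # voltage division: only about 0.091 V reaches the load
v_buffered = V_A                     # ideal unity-gain buffer: infinite input R, zero output R
print(v_loaded, v_buffered)
```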
https://en.wikipedia.org/wiki/Empirical%20orthogonal%20functions
In statistics and signal processing, the method of empirical orthogonal function (EOF) analysis is a decomposition of a signal or data set in terms of orthogonal basis functions which are determined from the data. The term is also interchangeable with the geographically weighted principal components analysis (PCA) in geophysics. The ith basis function is chosen to be orthogonal to the basis functions from the first through i − 1, and to minimize the residual variance. That is, the basis functions are chosen to be different from each other, and to account for as much variance as possible. The method of EOF analysis is similar in spirit to harmonic analysis, but harmonic analysis typically uses predetermined orthogonal functions, for example, sine and cosine functions at fixed frequencies. In some cases the two methods may yield essentially the same results. The basis functions are typically found by computing the eigenvectors of the covariance matrix of the data set. A more advanced technique is to form a kernel out of the data, using a fixed kernel. The basis functions from the eigenvectors of the kernel matrix are thus non-linear in the location of the data (see Mercer's theorem and the kernel trick for more information). See also Blind signal separation Multilinear PCA Multilinear subspace learning Nonlinear dimensionality reduction Orthogonal matrix Signal separation Singular spectrum analysis Transform coding Varimax rotation References and notes Further reading Bjornsson, Halldor and Silvia A. Venegas. "A manual for EOF and SVD analyses of climate data", McGill University, CCGCR Report No. 97-1, Montréal, Québec, 52pp., 1997. David B. Stephenson and Rasmus E. Benestad. "Environmental statistics for climate researchers". (See: "Empirical Orthogonal Function analysis") Christopher K. Wikle and Noel Cressie. "A dimension reduced approach to space-time Kalman filtering", Biometrika 86:815-829, 1999. Donald W. Denbo and John S. Allen. "Rotary Empirical
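A minimal sketch of the covariance-eigenvector computation described above (our own illustration with synthetic data; the array shapes and random-number setup are assumptions, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((200, 10))       # 200 time steps at 10 spatial points (synthetic)
data -= data.mean(axis=0)                   # remove the time mean at each point

cov = data.T @ data / (data.shape[0] - 1)   # spatial covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
eofs = eigvecs[:, order]                    # columns are EOF1, EOF2, ... (mutually orthogonal)
explained = eigvals[order] / eigvals.sum()  # fraction of variance captured by each EOF
pcs = data @ eofs                           # principal-component time series
print(explained[:3])
```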
https://en.wikipedia.org/wiki/Message%20queue
In computer science, message queues and mailboxes are software-engineering components typically used for inter-process communication (IPC), or for inter-thread communication within the same process. They use a queue for messaging – the passing of control or of content. Group communication systems provide similar kinds of functionality. The message queue paradigm is a sibling of the publisher/subscriber pattern, and is typically one part of a larger message-oriented middleware system. Most messaging systems support both the publisher/subscriber and message queue models in their API, e.g. Java Message Service (JMS). Remit and ownership Message queues implement an asynchronous communication pattern between two or more processes/threads whereby the sending and receiving party do not need to interact with the message queue at the same time. Messages placed onto the queue are stored until the recipient retrieves them. Message queues have implicit or explicit limits on the size of data that may be transmitted in a single message and the number of messages that may remain outstanding on the queue. Remit Many implementations of message queues function internally within an operating system or within an application. Such queues exist for the purposes of that system only. Other implementations allow the passing of messages between different computer systems, potentially connecting multiple applications and multiple operating systems. These message queuing systems typically provide resilience functionality to ensure that messages do not get "lost" in the event of a system failure. Examples of commercial implementations of this kind of message queuing software (also known as message-oriented middleware) include IBM MQ (formerly MQ Series) and Oracle Advanced Queuing (AQ). There is a Java standard called Java Message Service, which has several proprietary and free software implementations. Real-time operating systems (RTOSes) such as VxWorks and QNX encourage the use of
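A minimal in-process example of the pattern (our own sketch using Python's standard library; the message contents and queue size are invented):

```python
import queue
import threading

mailbox = queue.Queue(maxsize=8)     # bounded, like the queue limits described above

def producer():
    for i in range(3):
        mailbox.put(f"message {i}")  # stored until the consumer retrieves it
    mailbox.put(None)                # sentinel: nothing more to send

def consumer():
    while (msg := mailbox.get()) is not None:
        print("received:", msg)

threading.Thread(target=producer).start()
consumer()
```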
https://en.wikipedia.org/wiki/Message-oriented%20middleware
Message-oriented middleware (MOM) is software or hardware infrastructure supporting sending and receiving messages between distributed systems. MOM allows application modules to be distributed over heterogeneous platforms and reduces the complexity of developing applications that span multiple operating systems and network protocols. The middleware creates a distributed communications layer that insulates the application developer from the details of the various operating systems and network interfaces. APIs that extend across diverse platforms and networks are typically provided by MOM. This middleware layer allows software components (applications, Enterprise JavaBeans, servlets, and other components) that have been developed independently and that run on different networked platforms to interact with one another. Applications distributed on different network nodes use the application interface to communicate. In addition, by providing an administrative interface, this new, virtual system of interconnected applications can be made fault tolerant and secure. MOM provides software elements that reside in all communicating components of a client/server architecture and typically support asynchronous calls between the client and server applications. MOM reduces the involvement of application developers with the complexity of the master-slave nature of the client/server mechanism. Middleware categories Remote procedure call or RPC-based middleware Object request broker or ORB-based middleware Message-oriented middleware or MOM-based middleware All these models make it possible for one software component to affect the behavior of another component over a network. They are different in that RPC- and ORB-based middleware create systems of tightly coupled components, whereas MOM-based systems allow for a loose coupling of components. In an RPC- or ORB-based system, when one procedure calls another, it must wait for the called procedure to return before it can do anyt
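To make the coupling difference concrete (our own illustration, not from the article): an RPC-style caller blocks until the callee returns, while a sender using message-oriented middleware hands the message off and continues. The function, delay, and message format below are invented for the example.

```python
import queue
import time

def remote_procedure(x):
    time.sleep(0.5)          # pretend the remote server is slow
    return x * 2

# RPC style: tight coupling; the caller waits for the called procedure to return.
result = remote_procedure(21)

# MOM style: loose coupling; the sender enqueues the message and continues immediately.
outbox = queue.Queue()
outbox.put({"op": "double", "arg": 21})   # some consumer will process this later
print("enqueued without waiting; the RPC call returned", result)
```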
https://en.wikipedia.org/wiki/Internet%20culture
Internet culture is a quasi-underground culture developed and maintained among frequent and active users of the Internet (netizens or digital citizens) who primarily communicate with one another online as members of online communities; that is, a culture whose influence is "mediated by computer screens" and Information Communication Technology, specifically the Internet. Internet culture arises from the frequent interactions between members within various online communities and the use of these communities for communication, entertainment, business, and recreation. The earliest online communities of this kind were centered around the interests and hobbies of anonymous and pseudonymous users who were early adopters of the Internet, typically those with academic, technological, highly niche, or even subversive interests. The encompassing nature of Internet culture has led to the study of its many different elements, such as anonymity/pseudonymity, social media, gaming and specific communities, and has also raised questions about online identity and Internet privacy. Overview Internet culture is a culture mostly endemic to anonymous or pseudonymous online communities and spaces. Due to the widespread adoption and growing use of the Internet, the impact of Internet culture on predominantly offline societies and cultures has been extensive, and elements of Internet culture are increasingly impacting everyday life. Likewise, increasingly widespread adoption of the Internet has influenced Internet culture, frequently provoking fundamental shifts through shaming, censuring and censorship while pressuring other cultural expressions to go underground. Elements of Internet Culture While Internet subcultures differ, subcultures which emerged in the environment of the early Internet maintain a number of noticeably similar values, which manifest in similar ways. Macroculture Values Enlightenment principles are prominent values of Internet cult