https://en.wikipedia.org/wiki/Exact%20cover
In the mathematical field of combinatorics, given a collection S of subsets of a set X, an exact cover is a subcollection S* of S such that each element in X is contained in exactly one subset in S*. One says that each element in X is covered by exactly one subset in S*. An exact cover is a kind of cover. In other words, S* is a partition of X consisting of subsets contained in S. The problem of finding an exact cover is a kind of constraint satisfaction problem. The elements of S represent choices and the elements of X represent constraints. An exact cover problem involves the relation contains between subsets and elements. But an exact cover problem can be represented by any heterogeneous relation between a set of choices and a set of constraints. For example, an exact cover problem is equivalent to an exact hitting set problem, an incidence matrix, or a bipartite graph. In computer science, the exact cover problem is a decision problem to determine if an exact cover exists. The exact cover problem is NP-complete and is one of Karp's 21 NP-complete problems. It is NP-complete even when each subset in S contains exactly three elements; this restricted problem is known as exact cover by 3-sets, often abbreviated X3C. Knuth's Algorithm X is an algorithm that finds all solutions to an exact cover problem. DLX is the name given to Algorithm X when it is implemented efficiently using Donald Knuth's Dancing Links technique on a computer. The exact cover problem can be generalized slightly to involve not only exactly-once constraints but also at-most-once constraints. Finding Pentomino tilings and solving Sudoku are noteworthy examples of exact cover problems. The N queens problem is a generalized exact cover problem. Formal definition Given a collection S of subsets of a set X, an exact cover of X is a subcollection S* of S that satisfies two conditions: The intersection of any two distinct subsets in S* is empty, i.e., the subsets in S* are pairwise disjoint. In other words, each element in X is contained in at most one subset in S*.
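Knuth's Algorithm X is, at its core, a backtracking search: pick a still-uncovered element (a constraint), try each remaining subset (a choice) that covers it, and recurse on what is left. The sketch below is that bare search in Python, without the Dancing Links structure that makes DLX fast; the instance is Knuth's standard seven-element example, whose unique cover is {B, D, F}.

```python
def exact_covers(X, subsets):
    """Yield every exact cover of X as a list of keys into `subsets`."""
    X = set(X)
    if not X:
        yield []  # everything covered exactly once
        return
    e = min(X)  # deterministically pick one still-uncovered element
    for name, s in subsets.items():
        # s must cover e and must not re-cover anything already covered
        if e in s and s <= X:
            for rest in exact_covers(X - s, subsets):
                yield [name] + rest

# Knuth's example instance: the unique exact cover is B, D, F.
S = {"A": {1, 4, 7}, "B": {1, 4}, "C": {4, 5, 7},
     "D": {3, 5, 6}, "E": {2, 3, 6, 7}, "F": {2, 7}}
print(list(exact_covers({1, 2, 3, 4, 5, 6, 7}, S)))  # [['B', 'F', 'D']]
```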
https://en.wikipedia.org/wiki/Channel%20length%20modulation
Channel length modulation (CLM) is an effect in field effect transistors: for large drain biases, the length of the inverted channel region shortens as the drain bias increases. The result of CLM is an increase in current with drain bias and a reduction of output resistance. It is one of several short-channel effects in MOSFET scaling. It also causes distortion in JFET amplifiers. To understand the effect, first the notion of pinch-off of the channel is introduced. The channel is formed by attraction of carriers to the gate, and the current drawn through the channel is nearly constant, independent of drain voltage, in saturation mode. However, near the drain, the gate and drain jointly determine the electric field pattern. Instead of flowing in a channel, beyond the pinch-off point the carriers flow in a subsurface pattern made possible because the drain and the gate both control the current. The channel becomes weaker as the drain is approached, leaving a gap of uninverted silicon between the end of the formed inversion layer and the drain (the pinch-off region). As the drain voltage increases, its control over the current extends further toward the source, so the uninverted region expands toward the source, shortening the length of the channel region, the effect called channel-length modulation. Because resistance is proportional to length, shortening the channel decreases its resistance, causing an increase in current with increase in drain bias for a MOSFET operating in saturation. The effect is more pronounced the shorter the source-to-drain separation, the deeper the drain junction, and the thicker the oxide insulator. In the weak inversion region, an analogous influence of the drain leads to poorer device turn-off behavior known as drain-induced barrier lowering, a drain-induced lowering of threshold voltage. In bipolar devices, a similar increase in current
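In hand analysis, CLM is usually folded into the square-law saturation model as a (1 + λVds) correction factor. A minimal sketch, with made-up parameter values rather than numbers from any real process:

```python
def id_sat(vgs, vds, vth=0.7, k=200e-6, w_over_l=10.0, lam=0.1):
    """Saturation drain current with channel-length modulation:
    Id = 0.5 * k * (W/L) * (Vgs - Vth)**2 * (1 + lam * Vds)."""
    if vgs <= vth:
        return 0.0
    return 0.5 * k * w_over_l * (vgs - vth) ** 2 * (1 + lam * vds)

# With lam = 0 the current would be flat in Vds; with lam > 0 it rises,
# which is the finite output resistance described above.
for vds in (1.0, 2.0, 3.0):
    print(f"Vds = {vds} V  ->  Id = {id_sat(1.5, vds) * 1e3:.3f} mA")

# Small-signal output resistance is roughly ro = 1 / (lam * Id).
print(f"ro = {1 / (0.1 * id_sat(1.5, 2.0)):.0f} ohms")
```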
https://en.wikipedia.org/wiki/Class-T%20amplifier
Class T was a registered trademark for a switching (class-D) audio amplifier, used for Tripath's amplifier technologies (patent filed on Jun 20, 1996). Similar designs have now been widely adopted by different manufacturers. Amplifier The covered products use a class-D amplifier combined with proprietary techniques to control the pulse-width modulation to produce what is claimed to be better performance than other class-D amplifier designs. Among the publicly disclosed differences is real-time control of the switching frequency depending on the input signal and amplified output. One of the amplifiers, the TA2020, was named one of the twenty-five chips that "shook the world" by IEEE Spectrum magazine. The control signals in Class T amplifiers may be computed using digital signal processing or fully analog techniques. Currently available implementations use a loop similar to a higher-order delta-sigma (ΔΣ) (or sigma-delta) modulator, with an internal digital clock to control the sample comparator. The two key aspects of this topology are that (1) feedback is taken directly from the switching node rather than the filtered output, and (2) the higher-order loop provides much higher loop gain at high audio frequencies than would be possible in a conventional single-pole amplifier. Financial difficulties caused Tripath to file for Chapter 11 bankruptcy protection on 8 February 2007. Tripath's stock and intellectual property were purchased later that year by Cirrus Logic. Products and applications Tripath used to sell the amplifiers as chips, or as chipsets, to be integrated into products by other companies in several countries. For example: Sony, Panasonic and Blaupunkt used them in several car stereos and integrated home cinema systems; Apple Computer used them in their Power Mac G4 Cube, Power Mac G4 (Digital Audio), eMac and iMac (Flat Panel) computers; Audio Research, an audio electronics company, formerly an exclusive tube circuit specialist, produced a Tr
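The delta-sigma idea referred to above is easy to show in miniature. This is only a toy first-order modulator, not Tripath's proprietary higher-order loop: an integrator accumulates the error between the input and the fed-back 1-bit output, and a comparator requantizes on every clock.

```python
import math

def delta_sigma(signal):
    """Toy first-order delta-sigma modulator over samples in [-1, 1]."""
    integ, fb, bits = 0.0, -1.0, []
    for x in signal:
        integ += x - fb            # integrate the error vs. the fed-back output
        bit = integ > 0            # 1-bit quantizer (the comparator)
        fb = 1.0 if bit else -1.0  # feedback for the next sample
        bits.append(bit)
    return bits

# 1 kHz sine, heavily oversampled at 1 MHz: the local density of 1s
# tracks the input level, which is what the output stage amplifies.
fs, f = 1_000_000, 1_000
sig = [math.sin(2 * math.pi * f * n / fs) for n in range(2000)]
bits = delta_sigma(sig)
print(sum(bits) / len(bits))  # about 0.5 for a zero-mean input
```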
https://en.wikipedia.org/wiki/Green%20wave
A green wave occurs when a series of traffic lights (usually three or more) are coordinated to allow continuous traffic flow over several intersections in one main direction. Any vehicle traveling along with the green wave (at an approximate speed decided upon by the traffic engineers) will see a progressive cascade of green lights, and not have to stop at intersections. This allows higher traffic loads, and reduces noise and energy use (because less acceleration and braking is needed). In practical use, only a group of cars (known as a "platoon", the size of which is defined by the signal times) can use the green wave before the time band is interrupted to give way to other traffic flows. The coordination of the signals is sometimes done dynamically, according to sensor data of currently existing traffic flows - otherwise it is done statically, by the use of timers. Under certain circumstances, green waves can be interwoven with each other, but this increases their complexity and reduces usability, so in conventional set-ups only the roads and directions with the heaviest loads get this preferential treatment. In 2011, a study modeled the implementation of green waves during the night in a busy Manchester suburb (Chorlton-cum-Hardy) using S-Paramics microsimulation and the AIRE emissions module. The results showed that green wave signal setups on a network have the potential to: reduce CO2, NOx and PM10 emissions from traffic; reduce fuel consumption of vehicles; be used on roads that intersect with other green waves; reduce the time cars wait at side roads; give pedestrians more time to cross at crossings and help them to cross streets as vehicles travel in platoons; control the speed of traffic in urban areas; and reduce component wear of vehicles and indirect energy consumption through their manufacture. A green wave in both directions may be possible with different speed recommendations for each direction, otherwise traffic coming from one direction may re
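The offset arithmetic behind a static green wave is just travel time at the design speed: each downstream signal turns green later by distance divided by speed. A minimal sketch with invented spacings:

```python
design_speed = 50 / 3.6          # 50 km/h expressed in m/s
distances = [0, 400, 650, 1000]  # metres from the first intersection (made up)

# Offset = travel time from the first signal at the design speed.
for d in distances:
    print(f"{d:4d} m  ->  green starts {d / design_speed:5.1f} s after signal 1")
```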
https://en.wikipedia.org/wiki/Fuel%20fleas
Fuel fleas are microscopic hot particles of new or spent nuclear fuel. While small, they tend to be intensely radioactive. The fuel particles, around 10 micrometers in size, are a strong source of beta and gamma radiation and a weaker source of alpha radiation. The disparity between alpha and beta emission (alpha activity is typically 100–1000 times weaker than beta, so the particle emits far more negatively charged beta particles than positively charged alpha particles) leads to a buildup of positive electrostatic charge on the particle, causing the particle to "jump" from surface to surface and easily become airborne. Fuel fleas are typically rich in uranium-238 and contain an abundance of insoluble fission products. Due to their high beta activity, they can be detected by a Geiger counter. Their gamma output can allow analysis of their isotope composition (and therefore their age and origin) by a gamma-ray spectrometer. Fuel fleas can be very dangerous if they become embedded within a person's body, but are generally not considered more dangerous than an equal amount of radioactive material evenly distributed throughout the body. An exception would be if the flea were embedded in a particularly vulnerable organ such as the cornea of the eye or inhaled into the lungs. The most likely cause of fuel fleas is when the cladding surrounding the nuclear fuel becomes ruptured or cracked (known as "fuel pin failure"), allowing the fuel particles to escape and allowing the coolant to enter the fuel rod, further accelerating the process. In water-cooled reactors, this can be due to the reaction of the zirconium alloy cladding with the cooling water, which produces hydrogen. The hydrogen can be absorbed into the cladding material, resulting in hydrogen embrittlement. Embrittled cladding is less ductile and more susceptible to cracking. This process is avoided in modern reactors by carefully monitoring the fuel assemblies, limiting operating lifetime of the fuel, and by using alloys develo
https://en.wikipedia.org/wiki/Superior%20thoracic%20aperture
The superior thoracic aperture, also known as the thoracic inlet, is the opening at the top of the thoracic cavity. It is also clinically referred to as the thoracic outlet, as in thoracic outlet syndrome. A lower thoracic opening is the inferior thoracic aperture. Structure The superior thoracic aperture is essentially a hole surrounded by a bony ring, through which several vital structures pass. It is bounded by: the first thoracic vertebra (T1) posteriorly; the first pair of ribs laterally, forming lateral C-shaped curves posterior to anterior; and the costal cartilage of the first rib and the superior border of the manubrium anteriorly. Dimensions The adult thoracic outlet is around 6.5 cm antero-posteriorly and 11 cm transversely. Because of the obliquity of the first pair of ribs, the aperture slopes antero-inferiorly. Relations The clavicle articulates with the manubrium to form the anterior border of the thoracic outlet. Above the superior thoracic aperture is the root of the neck, and the superior mediastinum is inferiorly related. The brachial plexus is a superolateral relation of the thoracic outlet. The brachial plexus emerges between the anterior and middle scalene muscles, superior to the first rib, and passes obliquely and inferiorly, underneath the clavicle, into the shoulder and then the arm. Impingement of the plexus in the region of the scalenes, ribs, and clavicles is responsible for thoracic outlet syndrome. Function Structures that pass through the thoracic inlet include: the trachea; the oesophagus; the thoracic duct; the apices of the lungs; nerves (the phrenic nerves, the vagus nerves, the recurrent laryngeal nerves, and the sympathetic trunks); vessels, both arteries (the left and right common carotid arteries and the left subclavian artery) and veins (the internal jugular, brachiocephalic, and subclavian veins); and lymph nodes and lymphatic vessels. This is not an exhaustive list. There are several other minor, but important, vessels and nerves passing
https://en.wikipedia.org/wiki/Algorithmic%20information%20theory
Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects (as opposed to stochastically generated), such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" (except for a constant that only depends on the chosen universal programming language) the relations or inequalities found in information theory. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously." Besides the formalization of a universal measure for irreducible information content of computably generated objects, some main achievements of AIT were to show that: in fact algorithmic complexity follows (in the self-delimited case) the same inequalities (except for a constant) that entropy does, as in classical information theory; randomness is incompressibility; and, within the realm of randomly generated software, the probability of occurrence of any data structure is of the order of 2 raised to the negative length of the shortest program that generates it when running on a universal machine. AIT principally studies measures of irreducible information content of strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers. One of the main motivations behind AIT is the very study of the information carried by mathematical objects as in the field of metamathematics, e.g., as shown by the incompleteness results mentioned below. Other main motivations came from surpassing the limitations of classical information theory for single and fixed objects, formalizing the concept of randomness, and finding a meaningful probabilistic inference
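Kolmogorov complexity itself is uncomputable, but any ordinary compressor gives an upper bound on it, which is enough to illustrate the slogan "randomness is incompressibility". A sketch using zlib as a crude stand-in:

```python
import os, zlib

structured = b"ab" * 5000        # highly regular: 10 000 bytes
random_ish = os.urandom(10_000)  # incompressible with overwhelming probability

# The regular string has a short description ("print 'ab' 5000 times"),
# so it compresses to almost nothing; the random one does not.
print(len(zlib.compress(structured)))  # a few dozen bytes
print(len(zlib.compress(random_ish)))  # about 10 000, or slightly more
```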
https://en.wikipedia.org/wiki/Profundal%20zone
The profundal zone is a deep zone of an inland body of freestanding water, such as a lake or pond, located below the range of effective light penetration. This is typically below the thermocline, the vertical zone in the water through which temperature drops rapidly. The temperature difference may be large enough to hamper mixing with the littoral zone in some seasons, which causes a decrease in oxygen concentrations. The profundal is often defined as the deepest, vegetation-free, and muddy zone of the lacustrine benthal. The profundal zone is often part of the aphotic zone. Sediment in the profundal zone primarily comprises silt and mud. Organisms The lack of light and oxygen in the profundal zone determines the type of biological community that can live in this region, which is distinctly different from the community in the overlying waters. The profundal macrofauna is therefore characterized by physiological and behavioural adaptations to low oxygen concentration. While benthic fauna differs between lakes, Chironomidae and Oligochaeta often dominate the benthic fauna of the profundal zone because they possess hemoglobin-like molecules to extract oxygen from poorly oxygenated water. Due to the low productivity of the profundal zone, organisms rely on detritus sinking from the photic zone. Species richness in the profundal zone is often similar to that in the limnetic zone. Microbial levels in the profundal benthos are higher than those in the littoral benthos, potentially due to a smaller average sediment particle size. Benthic macroinvertebrates are believed to be regulated by top-down pressure. Nutrient cycling Nutrient fluxes in the profundal zone are primarily driven by release from the benthos. The anoxic nature of the profundal zone drives ammonia release from benthic sediment. This can drive phytoplankton production, to the point of a phytoplankton bloom, and create toxic conditions for many organisms, particularly at a high pH. Hypolimnetic anoxia can
https://en.wikipedia.org/wiki/Knightmare%20%281986%20video%20game%29
Knightmare is a 1986 vertically scrolling shooter video game developed and published by Konami for the MSX home computer. It was included in compilations for the MSX, PlayStation and Sega Saturn, followed by a port for mobile phones, and digital re-releases for the Virtual Console and Microsoft Windows. It is the first entry in the Knightmare trilogy. The game stars Popolon, a warrior who embarks on a quest to rescue the princess Aphrodite from the evil priest Hudnos. The player must fight waves of enemies while avoiding collision with their projectiles and obstacles along the way, and facing bosses. Knightmare was created by the MSX division at Konami under the management of Shigeru Fukutake. The character of Popolon was conceived by a staffer who later became the project's lead designer and writer, as the process of making original titles for the platform revolved around the person who came up with the characters. Development proceeded with a team of four or five members, lasting somewhere between four and six months. The music was scored by Miki Higashino, best known for her work in the Gradius and Suikoden series, and Yoshinori Sasaki. Knightmare proved popular among Japanese players, garnering generally positive reception from critics and retrospective commentators. It was followed by The Maze of Galious and Shalom: Knightmare III (1987), while Popolon and Aphrodite would later make appearances outside of the trilogy in other Konami titles. In the years since, fans have experimented with remaking and porting the title unofficially to other platforms. Gameplay Knightmare is a vertically scrolling shoot 'em up game starring Popolon, a warrior who embarks on a quest to rescue the princess Aphrodite from the evil priest Hudnos. The player controls Popolon through eight increasingly difficult stages across a Greek-esque fantasy setting, populated with an assortment of enemies and obstacles, over a constantly scrolling background that never stops moving unti
https://en.wikipedia.org/wiki/Philip%20Wadler
Philip Lee Wadler (born April 8, 1956) is a UK-based American computer scientist known for his contributions to programming language design and type theory. He is the chair of theoretical computer science at the Laboratory for Foundations of Computer Science at the School of Informatics, University of Edinburgh. He has contributed to the theory behind functional programming and the use of monads, and to the designs of the purely functional language Haskell and the XQuery declarative query language. In 1984, he created the Orwell language. Wadler was involved in adding generic types to Java 5.0. He is also author of "Theorems for free!", a paper that gave rise to much research on functional language optimization (see also Parametricity). Education Wadler received a Bachelor of Science degree in mathematics from Stanford University in 1977, and a Master of Science degree in computer science from Carnegie Mellon University in 1979. He completed his Doctor of Philosophy in computer science at Carnegie Mellon University in 1984. His thesis was entitled "Listlessness is better than laziness" and was supervised by Nico Habermann. Research and career Wadler's research interests are in programming languages. Wadler was a research fellow at the Programming Research Group (part of the Oxford University Computing Laboratory) and St Cross College, Oxford, during 1983–87. He was progressively lecturer, reader, and professor at the University of Glasgow from 1987 to 1996. Wadler was a member of technical staff at Bell Labs, Lucent Technologies (1996–99) and then at Avaya Labs (1999–2003). Since 2003, he has been professor of theoretical computer science in the School of Informatics at the University of Edinburgh. Wadler was editor of the Journal of Functional Programming from 1990 to 2004.
https://en.wikipedia.org/wiki/List%20of%20PDF%20software
This is a list of links to articles on software used to manage Portable Document Format (PDF) documents. The distinction between the various functions is not entirely clear-cut; for example, some viewers allow adding of annotations, signatures, etc. Some software allows redaction, removing content irreversibly for security. Extracting embedded text is a common feature, but other applications perform optical character recognition (OCR) to convert imaged text to machine-readable form, sometimes by using an external OCR module. Terminology Creators – to allow users to convert other file formats to PDF. Readers – to allow users to open, read and print PDF files. Editors – to allow users to edit or otherwise modify PDF files. Converters – to allow users to convert PDF files to other formats. Multi-platform Development libraries These are used by software developers to add and create PDF features. Creators These create files in their native formats, but then allow users to export them to PDF formats. Viewers These allow users to view (not edit or modify) any existing PDF file. AmigaOS Converters Antiword: A free Microsoft Office Word reader for various operating systems; converts binary files from Word 2, 6, 7, 97, 2000, 2002 and 2003 to plain text or PostScript; available for AmigaOS 4, MorphOS, AROS x86 dvipdfm: a DVI to PDF translator with zlib support Viewers Xpdf: a multi-platform viewer for PDF files, Amiga version uses X11 engine Cygnix. Linux and Unix Converters Collabora Online can be used as a web application, a command line tool, or a Java/Python library. Supported formats include OpenDocument, PDF, HTML, Microsoft Office formats (DOC/DOCX/RTF, XLS/XLSX, PPT/PPTX) and others. Creators, editors and viewers macOS Converters deskUNPDF for Mac: proprietary application from Docudesk to convert PDF files to Microsoft Office, LibreOffice, image, and data file formats Creators macOS: Creates PDF documents natively via print dialog Editors Adobe
https://en.wikipedia.org/wiki/The%20Goonies%20%28MSX%20video%20game%29
The Goonies is a 1986 platform game by Konami for the MSX based on the film of the same name. The music is a simple rendition of the song "The Goonies 'R' Good Enough", by Cyndi Lauper. Gameplay The Goonies is a platform and puzzle game, featuring five 'scenes'. After each successfully completed scene, a key word is given and thus the player can continue the game from this point at any time.
https://en.wikipedia.org/wiki/Protonema
A protonema (plural: protonemata) is a thread-like chain of cells that forms the earliest stage of development of the gametophyte (the haploid phase) in the life cycle of mosses. When a moss first grows from a spore, it starts as a germ tube, which lengthens and branches into a filamentous complex known as a protonema, which develops into a leafy gametophore, the adult form of a gametophyte in bryophytes. Moss spores germinate to form an alga-like filamentous structure called the protonema. It represents the juvenile gametophyte. While the protonema is growing by apical cell division, at some stage, under the influence of the phytohormone cytokinin, buds are induced which grow by three-faced apical cells. These give rise to gametophores, with stem- and leaf-like structures. Bryophytes do not have true leaves (megaphylls). Protonemata are characteristic of all mosses and some liverworts but are absent from hornworts. Protonemata of mosses are composed of two cell types: chloronemata, which form upon germination, and caulonemata, which later differentiate from chloronemata and on which buds are formed, which then differentiate into gametophores.
https://en.wikipedia.org/wiki/JSP%20model%201%20architecture
In the design of Java Web applications, there are two commonly used design models, referred to as Model 1 and Model 2. In Model 1, a request is made to a JSP or servlet and then that JSP or servlet handles all responsibilities for the request, including processing the request, validating data, handling the business logic, and generating a response. The Model 1 architecture is commonly used in smaller, simple task applications due to its ease of development. Although conceptually simple, this architecture is not conducive to large-scale application development because, inevitably, a great deal of functionality is duplicated in each JSP. Also, the Model 1 architecture unnecessarily ties together the business logic and presentation logic of the application. Combining business logic with presentation logic makes it hard to introduce a new 'view' or access point in an application. For example, in addition to an HTML interface, you might want to include a Wireless Markup Language (WML) interface for wireless access. In this case, using Model 1 will unnecessarily require the duplication of the business logic with each instance of the presentation code.
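The coupling Model 1 creates is easy to see in a toy handler. The sketch below transliterates the idea into Python to keep all examples here in one language (a real Model 1 page would be a JSP): one function validates the request, runs the business logic, and emits the HTML, so adding a WML view would mean duplicating the middle step.

```python
def handle_request(params):
    """Model 1 in miniature: request processing, validation, business
    logic and presentation all live in the same handler."""
    # 1. Validate the request data.
    try:
        qty = int(params.get("qty", ""))
    except ValueError:
        return "<p>Error: qty must be an integer</p>"
    # 2. Business logic, inline (this is the part Model 2 factors out).
    total = qty * 9.99
    # 3. Presentation, welded to the logic above.
    return f"<p>{qty} units cost ${total:.2f}</p>"

print(handle_request({"qty": "3"}))  # <p>3 units cost $29.97</p>
```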
https://en.wikipedia.org/wiki/King%27s%20Valley%20II
King's Valley II: The Seal of El Giza is a game for MSX1 and MSX2 computers by Konami. It is a sequel to King's Valley from 1985. The MSX2 version only saw a release in Japan. The same goes for a very rare "contest" version. The contest, held by four Japanese MSX magazines, two of which were MSX.FAN and Beep, was about making levels with the game's built-in level editor. The winners of this contest received a gold cartridge with the twenty custom stages on it. Custom levels can be saved to either a disk or tape, and the levels are interchangeable between the MSX1 and MSX2 versions. Story Far, far into the future, inter-planetary archaeologist Vick XIII makes a shocking discovery. The pyramids on Earth are malfunctioning devices of alien origin with enough energy to destroy the planet. And it's up to Vick to switch off the core functions of El Giza. Gameplay The game consists of six pyramids, each with its own wall engravings and color pattern; every pyramid contains 10 levels. The idea of the game is to collect crystals called soul stones in each level by solving the different puzzles and evading or killing the enemies using the many tools and weapons available, to unlock the exit door that will take you to the next level. Versions The later Konami game Castlevania: Portrait of Ruin for the Nintendo DS reuses the stage themes "In Search of the Secret Spell" and "Sandfall" for the Egyptian area of the game. The MSX2 version was the same game except for minor changes: the music was remixed and some of the items and backgrounds recolored. Castlevania: Harmony of Despair uses a remix of the Stage Clear theme as the Stage Clear theme for Chapter 7: Beauty, Desire, Situation Dire (not found on the OST).
https://en.wikipedia.org/wiki/Compunet
Compunet was a United Kingdom-based interactive service provider, catering primarily for the Commodore 64 but later for the Amiga and Atari ST. It was also known by its users as CNet. It ran from 1984 to May 1993. Overview Compunet hosted a wide range of content, and users were permitted to create their own sections within which they could upload their own graphics, articles and software. A custom editor existed in which the "frames" that made up the pages could be created either offline or when connected to the service. The editor's cache allowed users to quickly download a set of pages, then disconnect from the service in order to read them, thus saving on telephone costs. The user interface used a horizontally scrolling menu system, known as the "duck shoot", and navigation was essentially "select and click" with the ability to jump directly to pages with the use of keywords. Content could be voted upon by the users. The service had many features which were considerably ahead of their time, especially when compared to the Internet of today: pricing of content (optional; users could price their own content); voting on content quality; "upload anywhere" of content: programs, graphics and text (unless a section was protected); software could be dongle-protected (the custom modem doubled as the dongle in this instance); WYSIWYG editing of content; and a chat room (known as Partyline), which allowed users to create their own rooms (similar principles were later seen in IRC). The server hosted Multi-User Dungeon (MUD) (by Richard Bartle), Federation II, and Realm. The first two of these games continue to run on the Internet today. Games creator Jeff Minter and musician Rob Hubbard, along with various members of the demo scene, had a presence on the network. History In 1982, Commodore UK decided to construct a nationwide computer network for the use of teachers. The Commodore PET computer had been very successful. Nick Green developed the specification of
https://en.wikipedia.org/wiki/Transport%20triggered%20architecture
In computer architecture, a transport triggered architecture (TTA) is a kind of processor design in which programs directly control the internal transport buses of a processor. Computation happens as a side effect of data transports: writing data into a triggering port of a functional unit triggers the functional unit to start a computation. This is similar to what happens in a systolic array. Due to its modular structure, TTA is an ideal processor template for application-specific instruction set processors (ASIP) with customized datapath but without the inflexibility and design cost of fixed function hardware accelerators. Typically a transport triggered processor has multiple transport buses and multiple functional units connected to the buses, which provides opportunities for instruction level parallelism. The parallelism is statically defined by the programmer. In this respect (and obviously due to the large instruction word width), the TTA architecture resembles the very long instruction word (VLIW) architecture. A TTA instruction word is composed of multiple slots, one slot per bus, and each slot determines the data transport that takes place on the corresponding bus. The fine-grained control allows some optimizations that are not possible in a conventional processor. For example, software can transfer data directly between functional units without using registers. Transport triggering exposes some microarchitectural details that are normally hidden from programmers. This greatly simplifies the control logic of a processor, because many decisions normally done at run time are fixed at compile time. However, it also means that a binary compiled for one TTA processor will not run on another one without recompilation if there is even a small difference in the architecture between the two. The binary incompatibility problem, in addition to the complexity of implementing a full context switch, makes TTAs more suitable for embedded systems than for general purpos
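The trigger-port idea can be modelled in a few lines. This is only an illustrative simulation, and it serialises the moves that a real TTA would issue in parallel across several buses within one instruction:

```python
class Adder:
    """A functional unit: one plain operand port, one trigger port."""
    def __init__(self):
        self.operand = 0  # writing here is just a data transport
        self.result = 0
    def trigger(self, value):
        # Writing the trigger port is what starts the computation.
        self.result = self.operand + value

regs = {"r1": 2, "r2": 40, "r3": 0}
add = Adder()

# The "program" consists solely of data transports (moves):
add.operand = regs["r1"]  # move r1 -> add.operand
add.trigger(regs["r2"])   # move r2 -> add.trigger (starts the addition)
regs["r3"] = add.result   # move add.result -> r3
print(regs["r3"])         # 42
```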
https://en.wikipedia.org/wiki/Cryptogam
A cryptogam (scientific name Cryptogamae) is a plant (in the wide sense of the word) or a plant-like organism that reproduces by spores, without flowers or seeds. The name Cryptogamae means "hidden reproduction", referring to the fact that no seed is produced; thus cryptogams represent the non-seed-bearing plants. Other names, such as "thallophytes", "lower plants", and "spore plants", are also occasionally used. As a group, Cryptogamae are the opposite of the Phanerogamae or Spermatophyta, the seed plants. The best-known groups of cryptogams are algae, lichens, mosses, and ferns, but the group also includes non-photosynthetic organisms traditionally classified as plants, such as fungi, slime molds, and bacteria. The classification is now deprecated in Linnaean taxonomy. Cryptogams have been classified into three sub-kingdoms: Thallophyta (thallophytes), Bryophyta (bryophytes), and Pteridophyta (pteridophytes). At one time, the cryptogams were formally recognised as a group within the plant kingdom. In his system for classification of all known plants and animals, Carl Linnaeus (1707–1778) divided the plant kingdom into 24 classes, one of which was the "Cryptogamia". This included all plants with concealed reproductive organs. He divided Cryptogamia into four orders: Algae, Musci (bryophytes), Filices (ferns), and Fungi. Not all cryptogams are treated as part of the plant kingdom today; the fungi, in particular, are regarded as a separate kingdom, more closely related to animals than plants, while blue-green algae are now regarded as a phylum of bacteria. Therefore, in contemporary plant systematics, "Cryptogamae" is not a taxonomically coherent group, but is cladistically polyphyletic. However, all organisms known as cryptogams belong to the field traditionally studied by botanists, and the names of all cryptogams are regulated by the International Code of Nomenclature for algae, fungi, and plants. During World War II, the British Government Code and Cypher School r
https://en.wikipedia.org/wiki/Traffic%20flow
In mathematics and transportation engineering, traffic flow is the study of interactions between travellers (including pedestrians, cyclists, drivers, and their vehicles) and infrastructure (including highways, signage, and traffic control devices), with the aim of understanding and developing an optimal transport network with efficient movement of traffic and minimal traffic congestion problems. History Attempts to produce a mathematical theory of traffic flow date back to the 1920s, when the American economist Frank Knight first produced an analysis of traffic equilibrium, which was refined into Wardrop's first and second principles of equilibrium in 1952. Nonetheless, even with the advent of significant computer processing power, to date there has been no satisfactory general theory that can be consistently applied to real flow conditions. Current traffic models use a mixture of empirical and theoretical techniques. These models are then developed into traffic forecasts, which take account of proposed local or major changes, such as increased vehicle use, changes in land use or changes in mode of transport (with people moving from bus to train or car, for example), and identify areas of congestion where the network needs to be adjusted. Overview Traffic behaves in a complex and nonlinear way, depending on the interactions of a large number of vehicles. Due to the individual reactions of human drivers, vehicles do not interact simply following the laws of mechanics, but rather display cluster formation and shock wave propagation, both forward and backward, depending on vehicle density. Some mathematical models of traffic flow use a vertical queue assumption, in which the vehicles along a congested link do not spill back along the length of the link. In a free-flowing network, traffic flow theory refers to the traffic stream variables of speed, flow, and concentration. These relationships are mainly concerned with uninterrupted traffic flow, primarily found on fr
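The stream variables named at the end tie together through the fundamental relation: flow equals density (concentration) times speed. Greenshields' linear speed-density model is the classic textbook closure; the free-flow speed and jam density below are illustrative numbers, not measurements.

```python
VF = 100.0  # free-flow speed, km/h (illustrative)
KJ = 120.0  # jam density, vehicles/km (illustrative)

def speed(k):
    """Greenshields model: speed falls linearly with density k."""
    return VF * (1 - k / KJ)

def flow(k):
    """Fundamental relation: flow q = density k * speed v."""
    return k * speed(k)

# Flow peaks at half the jam density: capacity = VF * KJ / 4 = 3000 veh/h.
for k in (10, 30, 60, 90, 110):
    print(f"k = {k:3d} veh/km   v = {speed(k):5.1f} km/h   q = {flow(k):6.1f} veh/h")
```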
https://en.wikipedia.org/wiki/Postharvest
In agriculture, postharvest handling is the stage of crop production immediately following harvest, including cooling, cleaning, sorting and packing. The instant a crop is removed from the ground, or separated from its parent plant, it begins to deteriorate. Postharvest treatment largely determines final quality, whether a crop is sold for fresh consumption, or used as an ingredient in a processed food product. Goals The most important goals of post-harvest handling are keeping the product cool, to avoid moisture loss and slow down undesirable chemical changes, and avoiding physical damage such as bruising, to delay spoilage. Sanitation is also an important factor, to reduce the possibility of pathogens that could be carried by fresh produce, for example, as residue from contaminated washing water. After the field, post-harvest processing is usually continued in a packing house. This can be a simple shed, providing shade and running water, or a large-scale, sophisticated, mechanised facility, with conveyor belts, automated sorting and packing stations, walk-in coolers and the like. In mechanised harvesting, processing may also begin as part of the actual harvest process, with initial cleaning and sorting performed by the harvesting machinery. Initial post-harvest storage conditions are critical to maintaining quality. Each crop has an optimum range of storage temperature and humidity. Also, certain crops cannot be effectively stored together, as unwanted chemical interactions can result. Various methods of high-speed cooling, and sophisticated refrigerated and atmosphere-controlled environments, are employed to prolong freshness, particularly in large-scale operations. Postharvest shelf life Once harvested, vegetables and fruits are subject to the active process of degradation. Numerous biochemical processes continuously change the original composition of the crop until it becomes unmarketable. The period during which consumption is considered acceptable is def
https://en.wikipedia.org/wiki/Router%20on%20a%20stick
A router on a stick, also known as a one-armed router, is a router that has a single physical or logical connection to a network. It is a method of inter-VLAN routing where one router is connected to a switch via a single cable. The router has physical connections to the broadcast domains where one or more VLANs require routing between them. Devices on separate VLANs, as in separate local area networks, are unable to communicate with each other directly. Therefore, it is often used to forward traffic between locally attached hosts on separate logical routing domains or to facilitate routing table administration, distribution and relay. Details One-armed routers that perform traffic forwarding are often implemented on VLANs. They use a single Ethernet network interface port that is part of two or more Virtual LANs, enabling them to be joined. VLAN technology allows multiple virtual LANs to coexist on the same physical LAN, but two machines attached to the same switch cannot send Ethernet frames to each other if they are on different VLANs, even though they pass over the same wires. If they need to communicate, then a router must be placed between the two VLANs to forward packets, just as if the two LANs were physically isolated. The only difference is that the router in question may contain only a single Ethernet network interface controller (NIC) that is part of both VLANs. Hence, "one-armed". While uncommon, hosts on the same physical medium may be assigned addresses on different networks; a one-armed router could be assigned an address on each network and be used to forward traffic between locally distinct networks and to remote networks through another gateway. One-armed routers are also used for administration purposes such as route collection, multi-hop relay and looking glass servers. All traffic goes over the trunk twice, so the theoretical maximum sum of up and download speed is the line rate. For a two-armed configuration, uploading does not need to impact downloa
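The bandwidth point in the last sentences is simple arithmetic: every routed packet crosses the one trunk twice, inbound and then outbound, so the trunk's line rate caps the sum of both directions. A back-of-the-envelope check with illustrative numbers:

```python
line_rate = 1000  # trunk line rate in Mbit/s (illustrative)

# Each Mbit/s of routed traffic consumes 1 Mbit/s inbound plus
# 1 Mbit/s outbound on the same trunk, so only half the line rate
# is available to each direction of inter-VLAN traffic.
per_direction_cap = line_rate / 2
print(per_direction_cap)  # 500 Mbit/s each way on a 1 Gbit/s trunk
```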
https://en.wikipedia.org/wiki/Helium%20mass%20spectrometer
A helium mass spectrometer is an instrument commonly used to detect and locate small leaks. It was initially developed in the Manhattan Project during World War II to find extremely small leaks in the gas diffusion process of uranium enrichment plants. It typically uses a vacuum chamber in which a sealed container filled with helium is placed. Helium leaks out of the container, and the rate of the leak is detected by a mass spectrometer. Detection technique Helium is used as a tracer because it penetrates small leaks rapidly. Helium also has the properties of being non-toxic, chemically inert and present in the atmosphere only in minute quantities (5 ppm). Typically a helium leak detector will be used to measure leaks in the range of 10⁻⁵ to 10⁻¹³ Pa·m³·s⁻¹. A flow of 10⁻⁵ Pa·m³·s⁻¹ is about 0.006 ml per minute at standard conditions for temperature and pressure (STP). A flow of 10⁻¹³ Pa·m³·s⁻¹ is about 0.003 ml per century at STP. Types of leaks Typically there are two types of leaks relevant in the detection of helium as a tracer for leak detection: residual leaks and virtual leaks. A residual leak is a real leak due to an imperfect seal, a puncture, or some other hole in the system. A virtual leak is the semblance of a leak in a vacuum system caused by outgassing of chemicals trapped or adhered to the interior of a system that is actually sealed. As the gases are released into the chamber, they can create a false positive indication of a residual leak in the system. Uses Helium mass spectrometer leak detectors are used in production line industries such as refrigeration and air conditioning, automotive parts, carbonated beverage containers, food packages and aerosol packaging, as well as in the manufacture of steam products, gas bottles, fire extinguishers, tire valves, and numerous other products including all vacuum systems. Test methods Global helium spray This method requires the part to be tested to be connected to a helium leak detector. The outer surface of the part to be tes
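The quoted flow figures can be sanity-checked with a unit conversion: a throughput Q in Pa·m³/s corresponds to a gas volume of Q/P per unit time at pressure P, here taken as standard atmospheric pressure.

```python
P_ATM = 101_325  # Pa
ML_PER_M3 = 1e6

def ml_per_minute(q):
    """Convert a throughput q in Pa·m³/s to ml/min at atmospheric pressure."""
    return q / P_ATM * ML_PER_M3 * 60

minutes_per_century = 60 * 24 * 365.25 * 100
print(ml_per_minute(1e-5))                         # about 0.006 ml per minute
print(ml_per_minute(1e-13) * minutes_per_century)  # about 0.003 ml per century
```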
https://en.wikipedia.org/wiki/Green%20tea%20ice%20cream
Green tea ice cream, or matcha ice (抹茶アイス Matcha aisu), is an ice cream flavor popular in Japan and other parts of East Asia. Green tea ice cream is also sold in monaka form. It has been available in the United States since the late 1970s, primarily in Japanese restaurants and markets, but is currently moving into mainstream availability. Background There is a clear indication that Mount Fuji-shaped green tea ice cream was an item on the menu at a royal dinner party during the Meiji period (1868–1912). The true origin of green tea ice cream, however, is unknown. Although green tea ice cream seems to have existed as a local handmade product in some districts of Japan, no Japanese-flavored ice creams were merchandised until the 1990s because the major Japanese ice cream manufacturers were producing vanilla, strawberry and chocolate. However, green tea shaved ice had been well known and popular in Japan long before green tea ice cream. The amount of imported ice cream increased in the Japanese market after the import liberalization of ice cream in 1990. Sales of green tea ice cream in Japan began with the importation of green tea ice cream from Maeda-en USA in California with the catchphrase "Pure Japanese style made from California". It has been produced since April 1995, using fresh California milk, in the United States. The same product was soon imported and distributed to convenience stores and supermarkets in Japan as well, and it was introduced in some Japanese newspapers. During a certain period of the 1980s in Japan, Meiji Dairies sold its green tea ice cream under the Lady Borden brand but eventually discontinued the product. Häagen-Dazs Japan started producing green tea ice cream in 1996. The product is now sold in Japanese grocery markets and has become one of the company's most popular flavours. Statistics from the Japanese Ice Cream Association show that green tea ice cream was ranked third in a "Favourite Ice Cream Flavour" study. In order to p
https://en.wikipedia.org/wiki/Vendor%20Independent%20Messaging
VIM (Vendor Independent Messaging) was a standard API for applications to integrate with e-mail on Windows 3.x, proposed by Lotus, Borland, IBM & Novell in the early 1990s. Its main competitor was Microsoft's MAPI, which was the eventual winner of the MAPI v. VIM war. Ultimately, the choice of VIM or MAPI did not make a huge difference: bridges meant that a MAPI client could access a VIM provider and vice versa, and the rise of Internet e-mail in the mid-1990s rendered the panoply of proprietary e-mail systems which VIM and MAPI were meant to cater to largely irrelevant.
https://en.wikipedia.org/wiki/Bit%20manipulation
Bit manipulation is the act of algorithmically manipulating bits or other pieces of data shorter than a word. Computer programming tasks that require bit manipulation include low-level device control, error detection and correction algorithms, data compression, encryption algorithms, and optimization. For most other tasks, modern programming languages allow the programmer to work directly with abstractions instead of bits that represent those abstractions. Source code that does bit manipulation makes use of the bitwise operations: AND, OR, XOR, NOT, and possibly other operations analogous to the boolean operators; there are also bit shifts and operations to count ones and zeros, find high and low one or zero, set, reset and test bits, extract and insert fields, mask and zero fields, gather and scatter bits to and from specified bit positions or fields. Integer arithmetic operators can also effect bit-operations in conjunction with the other operators. Bit manipulation, in some cases, can obviate or reduce the need to loop over a data structure and can give manyfold speed-ups, as bit manipulations are processed in parallel. Terminology Bit twiddling, bit fiddling, bit bashing, and bit gymnastics are often used interchangeably with bit manipulation, but sometimes exclusively refer to clever or non-obvious ways or uses of bit manipulation, or tedious or challenging low-level device control data manipulation tasks. The term bit twiddling dates from early computing hardware, where computer operators would make adjustments by tweaking or twiddling computer controls. As computer programming languages evolved, programmers adopted the term to mean any handling of data that involved bit-level computation. Bitwise operation A bitwise operation operates on one or more bit patterns or binary numerals at the level of their individual bits. It is a fast, primitive action directly supported by the central processing unit (CPU), and is used to manipulate values for comparis
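A few of the classic idioms the list above names, written in Python's bitwise syntax (identical in spirit to C's):

```python
x = 0b10110100

print(x & 1)              # test the low bit: 0
print(x | (1 << 3))       # set bit 3: 0b10111100
print(x & ~(1 << 2))      # clear bit 2: 0b10110000
print(x ^ (1 << 7))       # toggle bit 7: 0b00110100
print(x & -x)             # isolate the lowest set bit: 4
print(bin(x).count("1"))  # population count: 4 (int.bit_count() in 3.10+)
```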
https://en.wikipedia.org/wiki/Municipal%20wireless%20network
A municipal wireless network is a citywide wireless network. This usually works by providing municipal broadband via Wi-Fi to large parts or all of a municipal area by deploying a wireless mesh network. The typical deployment design uses hundreds of wireless access points deployed outdoors, often on poles. The operator of the network acts as a wireless internet service provider. Overview Municipal wireless networks go far beyond the existing piggybacking opportunities available near public libraries and some coffee shops. The basic premise of carpeting an area with wireless service in urban centers is that it is more economical to the community to provide the service as a utility rather than to have individual households and businesses pay private firms for such a service. Such networks are capable of enhancing city management and public safety, especially when used directly by city employees in the field. They can also be a social service to those who cannot afford private high-speed services. When the network service is free and a small number of clients consume a majority of the available capacity, operating and regulating the network might prove difficult. In 2003, Verge Wireless formed an agreement with Tropos Networks to build a municipal wireless network in the downtown area of Baton Rouge, Louisiana. Carlo MacDonald, the founder of Verge Wireless, suggested that it could give cities a way to improve economic development and let developers build mobile applications that make use of faster bandwidth. Verge Wireless built networks for Baton Rouge, New Orleans, and other areas. Some applications include wireless security cameras, police mug shot software, and location-based advertising. In 2007, some companies with existing cell sites offered high-speed wireless services where the laptop owner purchased a PC card or adapter based on EV-DO cellular data receivers or WiMAX rather than 802.11b/g. A few high-end laptops at that time featured built-in su
https://en.wikipedia.org/wiki/Degustation
Dégustation is the careful, appreciative tasting of various foods, focusing on the gustatory system, the senses, high culinary art and good company. Dégustation is more likely to involve sampling small portions of all of a chef's signature dishes in one sitting. Usually consisting of many courses, it may be accompanied by a matching wine dégustation which complements each dish. History The French term dégustation is still commonly used in English-language contexts. Modern dégustation probably comes from the French kitchens of the early 20th century and differs from earlier meals with many courses because those earlier meals were served as full-sized dishes at each course. Examples Sampling a selection of cheeses, at home or in a restaurant, may also be called a dégustation. Three to four varieties are normally chosen, generally including a semi-soft cheese, a goat's cheese, and a blue cheese. The stronger varieties are normally tasted last. A six-course dégustation may include two seafood, red meat and dessert items with matching wines, while the same menu could add a vegetarian item and other types of dish to expand the menu to (for example) a nine-course dégustation menu. The popular Spanish style of tapas is similar to the dégustation style, but is not in itself a complete set menu offering the chef's signature dishes; instead it offers a variety from which the diner can choose. See also Tasting menu Formal dinner Wine tasting
https://en.wikipedia.org/wiki/SAFARI
SAFARI was an attempt by the French government, under the presidency of Georges Pompidou, to create a centralized database of personal data. SAFARI stands for Système Automatisé pour les Fichiers Administratifs et le Répertoire des Individus, "Automated System for Administrative Files and the Repertory of Individuals". History The first mention of the project was made in a three-page article in the INSEE central review in March 1970. The French government began secretly working on the SAFARI project in 1973. The project aimed to identify French citizens with a unique number that would connect the information about them from various databases. In particular, it would use the INSEE code (also used as a Social Security number). The system was to be based on the Iris-80 computer. On March 21, 1974, an article in the newspaper Le Monde by journalist Philippe Boucher revealed the existence of the project. The public outcry was immense, some critics comparing it to the national identity database created by the Vichy regime during Nazi occupation. The massive popular rejection of SAFARI prompted the minister of justice to create the Commission on Data Processing and Freedom, also known as the Tricot Commission after its leader Bernard Tricot. This led to the creation of the CNIL to ensure data privacy, as well as an accompanying 1978 law, the Data Protection and Liberties Act, restricting the storage and processing of personal data.
https://en.wikipedia.org/wiki/Experimental%20testing%20of%20time%20dilation
Time dilation as predicted by special relativity is often verified by means of particle lifetime experiments. According to special relativity, the rate of a clock C traveling between two synchronized laboratory clocks A and B, as seen by a laboratory observer, is slowed relative to the laboratory clock rates. Since any periodic process can be considered a clock, the lifetimes of unstable particles such as muons must also be affected, so that moving muons should have a longer lifetime than resting ones. A variety of experiments confirming this effect have been performed both in the atmosphere and in particle accelerators. Another type of time dilation experiment is the group of Ives–Stilwell experiments measuring the relativistic Doppler effect. Atmospheric tests Theory The emergence of the muons is caused by the collision of cosmic rays with the upper atmosphere, after which the muons reach Earth. The probability that muons can reach the Earth depends on their half-life, which itself is modified by the relativistic corrections of two quantities: a) the mean lifetime of muons and b) the length between the upper and lower atmosphere (at Earth's surface). This allows for a direct application of length contraction upon the atmosphere at rest in inertial frame S, and time dilation upon the muons at rest in S′. Time dilation and length contraction Length of the atmosphere: the contraction formula is given by L = L0/γ, where L0 is the proper length of the atmosphere and L its contracted length. As the atmosphere is at rest in S, we have γ=1 and its proper length L0 is measured. As it is in motion in S′, we have γ>1 and its contracted length L′ is measured. Decay time of muons: the time dilation formula is T = γT0, where T0 is the proper time of a clock comoving with the muon, corresponding with the mean decay time of the muon in its proper frame. As the muon is at rest in S′, we have γ=1 and its proper time T′0 is measured. As it is moving in S, we have γ>1, therefore its proper ti
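Plugging in numbers makes the two bookkeeping routes concrete. The lifetime below is the standard muon value; the speed and altitude are typical illustrative choices, not data from a specific experiment. Both frames must (and do) predict the same survival probability:

```python
import math

c = 299_792_458   # m/s
tau0 = 2.197e-6   # mean proper lifetime of the muon, s
v = 0.98 * c      # illustrative muon speed
gamma = 1 / math.sqrt(1 - (v / c) ** 2)  # about 5.0
L0 = 15_000       # illustrative production altitude, m (proper length in S)

# Lab frame S: full distance L0, but the dilated lifetime gamma * tau0.
p_lab = math.exp(-(L0 / v) / (gamma * tau0))
# Muon frame S': proper lifetime tau0, but the contracted distance L0/gamma.
p_muon = math.exp(-((L0 / gamma) / v) / tau0)

print(gamma)          # about 5.03
print(p_lab, p_muon)  # identical survival probabilities, about 0.01
```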
https://en.wikipedia.org/wiki/St-connectivity
In computer science, st-connectivity or STCON is a decision problem asking, for vertices s and t in a directed graph, if t is reachable from s. Formally, the decision problem is given by PATH = {⟨G, s, t⟩ | G is a directed graph with a path from vertex s to vertex t}. Complexity On a sequential computer, st-connectivity can easily be solved in linear time by either depth-first search or breadth-first search. The interest in this problem in computational complexity concerns its complexity with respect to more limited forms of computation. For instance, the complexity class of problems that can be solved by a non-deterministic Turing machine using only a logarithmic amount of memory is called NL. The st-connectivity problem can be shown to be in NL, as a non-deterministic Turing machine can guess the next node of the path, while the only information which has to be stored is the total length of the path and which node is currently under consideration. The algorithm terminates if either the target node t is reached, or the length of the path so far exceeds n, the number of nodes in the graph. The complement of st-connectivity, known as st-non-connectivity, is also in the class NL, since NL = coNL by the Immerman–Szelepcsényi theorem. In particular, the problem of st-connectivity is actually NL-complete, that is, every problem in the class NL is reducible to st-connectivity under a log-space reduction. This remains true for the stronger case of first-order reductions. The log-space reduction from any language in NL to STCON proceeds as follows: Consider the non-deterministic log-space Turing machine M that accepts a language in NL. Since there is only logarithmic space on the work tape, all possible states of the Turing machine (where a state is the state of the internal finite state machine, the position of the head and the contents of the work tape) are polynomially many. Map all possible states of M to vertices of a graph, and put an edge between u and v if the state v can be reached from u within one step o
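The "easy on a sequential computer" half is worth seeing next to the log-space discussion: breadth-first search decides st-connectivity in linear time, at the cost of linear memory for the visited set, which is exactly what the NL setting disallows.

```python
from collections import deque

def st_connected(graph, s, t):
    """graph: dict mapping a vertex to an iterable of its successors."""
    seen, frontier = {s}, deque([s])
    while frontier:
        u = frontier.popleft()
        if u == t:
            return True
        for v in graph.get(u, ()):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return False

g = {"s": ["a", "b"], "a": ["t"], "b": ["a"]}
print(st_connected(g, "s", "t"))  # True
print(st_connected(g, "t", "s"))  # False
```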
https://en.wikipedia.org/wiki/Virgil%20of%20Salzburg
Virgil (died 27 November 784), also spelled Vergil, Vergilius, Virgilius, Feirgil or Fearghal, was an Irish priest and early astronomer. He left Ireland around 745, intending to visit the Holy Land; but, like many of his countrymen, he settled in Francia. Virgil served as abbot of Aghaboe, bishop of Ossory and later bishop of Salzburg. He was called "the Apostle of Carinthia" and "the geometer". Biography He originated from a noble family of Ireland, where his name was Feirgil or Fearghal, and is said to have been a descendant of Niall of the Nine Hostages. Feirgil was probably educated at the Iona monastery. In the Annals of the Four Masters and the Annals of Ulster, he is referenced as the Abbot of Aghaboe, in County Laois, where he was known as "the Geometer" because of his knowledge of geography. Around 745, he left Ireland, intending to visit the Holy Land; but, like many of his countrymen, who seem to have adopted this practice as a work of piety, he settled down in Francia, where he was received with great favour by Pippin the Younger, who was then Mayor of the Palace under Childeric III, King of the Franks. He was an adviser to Pippin, and probably used a copy of the Collectio canonum Hibernensis (an Irish collection of canon law) to advise Pippin to receive royal unction in 751, to assist his recognition as King Pippin III after the deposition of Childeric. After spending two years at Cressy, near Compiègne, he went to Bavaria, at the invitation of Duke Odilo, where he founded the monastery of Chiemsee, and within a year or two was made Abbot of St Peter's Abbey at Salzburg. Among his notable accomplishments was the conversion of the Alpine Slavs to Christianity; he also sent missionaries to Hungary. As Abbot of St Peter's, he clashed with Saint Boniface. A priest having, through ignorance, conferred the Sacrament of Baptism using, in place of the correct formula, the words "Baptizo te in nomine patria et filia et spiritu sancta" (instead of "Baptizo te in nomine patris et
https://en.wikipedia.org/wiki/Markman%20v.%20Westview%20Instruments%2C%20Inc.
Markman v. Westview Instruments, Inc., 517 U.S. 370 (1996), is a United States Supreme Court case on whether the interpretation of patent claims is a matter of law or a question of fact. An issue designated as a matter of law is resolved by the judge, and an issue construed as a question of fact is determined by the jury. Background Herbert Markman patented a system to track clothes through the dry-cleaning process, using barcodes to generate receipts and track inventory. The 7th Amendment preserves the right to a jury trial as it existed in 1791, and it guarantees that right in patent infringement cases; there is no dispute that infringement cases today must be tried by a jury, as their predecessors were in 1791. However, the court held that the construction of the patent, including the terms of art within its claim, is exclusively within the court's province. In general, the effectiveness of a particular patent depends on its potential to block competitors. The key for a patent holder is getting the proper definition of the words used in the patent so as to block the particular troublesome competitive product. Before this decision, juries were responsible for deciding the meaning of the words used in patent claims. Opposing results in cases with similar facts were common, and a perception arose that the outcome of such trials was somewhat arbitrary. In Markman, the Court held that judges, not juries, would evaluate and decide the meaning of the words used in patent claims. Judges were to look at four sources for definitions, in order of priority: the written description accompanying the patent claims is most relevant; then the documentation of the history of the patent as it went through the application; then standard dictionaries of English; finally, if all else fails, expert testimony from experts "skilled in the art" at issue. This case has had a significant impact on the patent litigation process in the United States. Many jurisdict
https://en.wikipedia.org/wiki/Milky%20seas%20effect
Milky seas (Somali: Kaluunka iftiima), also called mareel, is a luminous phenomenon in the ocean in which large areas of seawater (up to ) appear to glow translucently (in varying shades of blue). Such occurrences glow brightly enough at night to be visible from satellites orbiting Earth. Mariners and other seafarers have reported that the ocean often emits a visible glow which extends for miles at night. In 2005, scientists announced that for the first time, they had obtained photographic evidence of this glow. It is most likely caused by bioluminescence. Effect Between 1915 and 1993, 235 sightings of milky seas were documented, most of them concentrated in the northwestern Indian Ocean near Somalia. The luminescent glow is concentrated on the surface of the ocean and does not mix evenly throughout the water column. In 1985, a research vessel in the Arabian Sea took water samples during a milky seas event. The researchers concluded that the effect was caused by the bacterium Vibrio harveyi. Mareel is typically caused by Noctiluca scintillans (popularly known as "sea sparkle"), a dinoflagellate that glows when disturbed and is found in oceans throughout much of the world. In July 2015, at Alleppey, Kerala, India, the phenomenon occurred, and the National Institute of Oceanography and the Kerala Fisheries Department investigated it, finding that the glittering waves were the result of Noctiluca scintillans. In 2005, Steven Miller of the Naval Research Laboratory in Monterey, California, was able to match 1995 satellite images with a first-hand account from a merchant ship. U.S. Defense Meteorological Satellite Program imagery showed the milky area to be approximately (roughly the size of Connecticut). The luminescent field was observed to glow over three consecutive nights. While monochromatic photos make this effect appear white, Monterey Bay Aquarium Research Institute scientist Steven Haddock (an author of a milky seas effect study) has commented, "the lig
https://en.wikipedia.org/wiki/Olecranon
The olecranon is a large, thick, curved bony eminence of the ulna, a long bone in the forearm that projects behind the elbow. It forms the most pointed portion of the elbow and is opposite to the cubital fossa or elbow pit. The olecranon serves as a lever for the extensor muscles that straighten the elbow joint. Structure The olecranon is situated at the proximal end of the ulna, one of the two bones in the forearm. When the hand faces forward (supination) the olecranon faces towards the back (posteriorly). It is bent forward at the summit so as to present a prominent lip which is received into the olecranon fossa of the humerus during extension of the forearm. Its base is contracted where it joins the body, and this is the narrowest part of the upper end of the ulna. Its posterior surface, directed backward, is triangular, smooth, subcutaneous, and covered by a bursa. Its superior surface is of quadrilateral form, marked behind by a rough impression for the insertion of the triceps brachii; and in front, near the margin, by a slight transverse groove for the attachment of part of the posterior ligament of the elbow joint. Its anterior surface is smooth, concave, and forms the upper part of the semilunar notch. Its borders present continuations of the groove on the margin of the superior surface; they serve for the attachment of ligaments: the back part of the ulnar collateral ligament medially, and the posterior ligament laterally. From the medial border a part of the flexor carpi ulnaris arises, while to the lateral border the anconeus muscle is attached. Clinical significance Fractures of the olecranon are common injuries. An olecranon fracture with anterior displacement of the radial head is called a Hume fracture. Etymology The word "olecranon" comes from the Greek olene, meaning elbow, and kranon, meaning head. Additional images See also Olecranon bursitis Olecranon fossa
https://en.wikipedia.org/wiki/Speech%20Application%20Language%20Tags
Speech Application Language Tags (SALT) is an XML-based markup language that is used in HTML and XHTML pages to add voice recognition capabilities to web-based applications. Description Speech Application Language Tags enables multimodal and telephony-enabled access to information, applications, and Web services from PCs, telephones, tablet PCs, and wireless personal digital assistants (PDAs). The Speech Application Language Tags extend existing mark-up languages such as HTML, XHTML, and XML. Multimodal access will enable users to interact with an application in a variety of ways: they will be able to input data using speech, a keyboard, keypad, mouse and/or stylus, and produce data as synthesized speech, audio, plain text, motion video, and/or graphics. History SALT was developed as a competitor to VoiceXML and was supported by the SALT Forum. The SALT Forum was founded on October 15, 2001, by Microsoft, along with Cisco Systems, Comverse, Intel, Philips Consumer Electronics, and ScanSoft. The SALT 1.0 specification was submitted to the W3C (World Wide Web Consortium) for review in August 2002. However, the W3C continued developing its VoiceXML 2.0 standard, which reached the final "Recommendation" stage in March 2004. By 2006, Microsoft realized Speech Server had to support the W3C VoiceXML standard to remain competitive. Microsoft joined the VoiceXML Forum as a Promoter in April of that year. Speech Server 2007 supports VoiceXML 2.0 and 2.1 in addition to SALT. In 2007, Microsoft purchased Tellme, one of the largest VoiceXML service providers. By that point nearly every other SALT Forum company had committed to VoiceXML. The last press release posted to the SALT Forum website was in 2003, while the VoiceXML Forum is quite active. "SALT [Speech Application Language Tags] is a direct competitor but has not reached the level of maturity of VoiceXML in the standards process," said Bill Meisel, principal at TMA Associates, a speech technology research firm. Us
https://en.wikipedia.org/wiki/Separable%20state
In quantum mechanics, separable states are multipartite quantum states that can be written as a convex combination of product states. Product states are multipartite quantum states that can be written as a tensor product of states in each space. The physical intuition behind these definitions is that product states have no correlation between the different degrees of freedom, while separable states might have correlations, but all such correlations can be explained as due to a classical random variable, as opposed to being due to entanglement. In the special case of pure states the definition simplifies: a pure state is separable if and only if it is a product state. A state is said to be entangled if it is not separable. In general, determining if a state is separable is not straightforward and the problem is classed as NP-hard. Separability of bipartite systems Consider first composite states with two degrees of freedom, referred to as bipartite states. By a postulate of quantum mechanics these can be described as vectors in the tensor product space H1 ⊗ H2. In this discussion we will focus on the case of the Hilbert spaces H1 and H2 being finite-dimensional. Pure states Let {|a_i⟩} and {|b_j⟩} be orthonormal bases for H1 and H2, respectively. A basis for H1 ⊗ H2 is then {|a_i⟩ ⊗ |b_j⟩}, or in more compact notation {|a_i b_j⟩}. From the very definition of the tensor product, any vector of norm 1, i.e. a pure state of the composite system, can be written as |ψ⟩ = Σ_{i,j} c_{ij} |a_i⟩ ⊗ |b_j⟩, where each c_{ij} is a constant. If |ψ⟩ can be written as a simple tensor, that is, in the form |ψ⟩ = |ψ_1⟩ ⊗ |ψ_2⟩ with |ψ_i⟩ a pure state in the i-th space, it is said to be a product state, and, in particular, separable. Otherwise it is called entangled. Note that, even though the notions of product and separable states coincide for pure states, they do not in the more general case of mixed states. Pure states are entangled if and only if their partial states are not pure. To see this, write the Schmidt decomposition of |ψ⟩ as |ψ⟩ = Σ_{k=1}^{r} λ_k |u_k⟩ ⊗ |v_k⟩, where the λ_k are positive real numbers, r is the Schmidt rank of |ψ⟩, and {|u_k⟩} and {|v_k⟩} are orthonormal sets in H1 and H2, respectively.
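The product-versus-entangled distinction for bipartite pure states can be tested numerically: arrange the coefficients c_{ij} into a matrix, and the Schmidt rank is that matrix's rank, with rank 1 characterizing product states. A small illustrative sketch (not from the article):

```python
import numpy as np

def schmidt_rank(c, tol=1e-10):
    """Schmidt rank of a bipartite pure state given its coefficient
    matrix c, where |psi> = sum_ij c[i, j] |a_i> (x) |b_j>."""
    singular_values = np.linalg.svd(c, compute_uv=False)
    return int(np.sum(singular_values > tol))

# Product state |0> (x) |+>: coefficients factor as an outer product.
product = np.outer([1, 0], [1, 1]) / np.sqrt(2)
# Bell state (|00> + |11>)/sqrt(2): coefficients do not factor.
bell = np.array([[1, 0], [0, 1]]) / np.sqrt(2)

print(schmidt_rank(product))  # 1 -> a product state, hence separable
print(schmidt_rank(bell))     # 2 -> entangled
```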
https://en.wikipedia.org/wiki/Tank%20Battalion
is a multi-directional shooter arcade video game that was released by Namco in 1980. The only direct home conversion is for the MSX, although it was followed up by two sequels: Battle City for the Famicom in 1985 and Tank Force for arcades in 1991. Gameplay The player, controlling a tank, must destroy twenty enemy tanks in each round, which enter the playfield from the top of the screen. The enemy tanks attempt to destroy the player's base (represented on the map as an eagle) as well as the player's tank itself. A round is cleared when the player destroys all twenty enemy tanks, but the game ends if the player's base is destroyed or they run out of lives. Reception Cash Box believed that "the real excitement" of Tank Battalion lay in its ability to modify the level design by destroying the brick walls. Retrospectively, in 2015, a writer for Beep! enjoyed the Sord M5 version for its improvements over the arcade original, such as the smoother movement of the player's tank, but disliked the squashed-looking graphics and narrow playing space. While the writer believed the MSX version was superior, they still recommended the M5 version for Namco fans and collectors. Legacy A theme based on the game for Pac-Man 99 was released as free post-launch DLC, featuring visuals and sounds from the game. Notes
https://en.wikipedia.org/wiki/Peanut%20allergy
Peanut allergy is a type of food allergy to peanuts. It is different from tree nut allergies, because peanuts are legumes and not true nuts. Physical symptoms of allergic reaction can include itchiness, hives, swelling, eczema, sneezing, asthma attack, abdominal pain, drop in blood pressure, diarrhea, and cardiac arrest. Anaphylaxis may occur. Those with a history of asthma are more likely to be severely affected. It is due to a type I hypersensitivity reaction of the immune system in susceptible individuals. The allergy is recognized "as one of the most severe food allergies due to its prevalence, persistency, and potential severity of allergic reaction." Prevention may be partly achieved through early introduction of peanuts to the diets of pregnant women and babies. It is recommended that babies at high risk be given peanut products in areas where medical care is available as early as 4 months of age. The principal treatment for anaphylaxis is the injection of epinephrine. In the United States, peanut allergy is present in 0.6% of the population. Among children in the Western world, rates are between 1.5% and 3% and have increased over time. It is a common cause of food-related fatal and near-fatal allergic reactions. Signs and symptoms Most symptoms of peanut allergy are related to the action of immunoglobulin E (IgE) and other anaphylatoxins which act to release histamine and other mediator substances from mast cells (degranulation). In addition to other effects, histamine induces vasodilation of arterioles and constriction of bronchioles in the lungs, also known as bronchospasm. Symptoms can also include mild itchiness, hives, angioedema, facial swelling, rhinitis, vomiting, diarrhea, acute abdominal pain, exacerbation of atopic eczema, asthma, and cardiac arrest. Anaphylaxis may occur. Cross-reactivity with other food allergies People with confirmed peanut allergy may have cross-reactivity to tree nut, soy, and other legumes, such as peas and lentils an
https://en.wikipedia.org/wiki/KMS%20state
In the statistical mechanics of quantum mechanical systems and quantum field theory, the properties of a system in thermal equilibrium can be described by a mathematical object called a Kubo–Martin–Schwinger (KMS) state: a state satisfying the KMS condition. Ryogo Kubo introduced the condition in 1957, Julian Schwinger used it in 1959 to define thermodynamic Green's functions, and Rudolf Haag, Marinus Winnink and Nico Hugenholtz used the condition in 1967 to define equilibrium states and called it the KMS condition. Overview The simplest case to study is that of a finite-dimensional Hilbert space, in which one does not encounter complications like phase transitions or spontaneous symmetry breaking. The density matrix of a thermal state is given by ρ = e^{−β(H−μN)} / Tr[e^{−β(H−μN)}], where H is the Hamiltonian operator, N is the particle number operator (or charge operator, if we wish to be more general), and Tr[e^{−β(H−μN)}] is the partition function. We assume that N commutes with H, or in other words, that particle number is conserved. In the Heisenberg picture, the density matrix does not change with time, but the operators are time-dependent. In particular, translating an operator A by τ into the future gives the operator α_τ(A) = e^{iHτ} A e^{−iHτ}. A combination of time translation with an internal symmetry "rotation" gives the more general α_τ^μ(A) = e^{i(H−μN)τ} A e^{−i(H−μN)τ}. A bit of algebraic manipulation shows that the expected values satisfy ⟨α_τ(A) B⟩ = ⟨B α_{τ+iβ}(A)⟩ for any two operators A and B and any real τ (we are working with finite-dimensional Hilbert spaces after all). We used the fact that the density matrix commutes with any function of (H − μN) and that the trace is cyclic. As hinted at earlier, with infinite-dimensional Hilbert spaces we run into a lot of problems like phase transitions, spontaneous symmetry breaking, operators that are not trace class, divergent partition functions, etc. The complex function of z, ⟨α_z(A) B⟩, converges in the complex strip −β < Im z < 0, whereas ⟨B α_z(A)⟩ converges in the complex strip 0 < Im z < β, if we make certain technical assumptions like that the spectrum of H − μN is bounded from below.
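In the finite-dimensional setting above, the KMS condition ⟨α_τ(A)B⟩ = ⟨B α_{τ+iβ}(A)⟩ can be checked numerically. A sketch with μ = 0 and a random Hermitian H (all names and values here are our own choices, not from the article):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, beta, tau = 4, 0.7, 0.3

M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2          # a random Hermitian "Hamiltonian"
rho = expm(-beta * H)
rho /= np.trace(rho)              # thermal density matrix

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def evolve(X, z):
    """Heisenberg evolution alpha_z(X) = e^{iHz} X e^{-iHz}; z may be complex."""
    return expm(1j * H * z) @ X @ expm(-1j * H * z)

lhs = np.trace(rho @ evolve(A, tau) @ B)
rhs = np.trace(rho @ B @ evolve(A, tau + 1j * beta))
print(abs(lhs - rhs))  # ~1e-14: the KMS condition holds to machine precision
```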
https://en.wikipedia.org/wiki/Features%2C%20events%2C%20and%20processes
Features, Events, and Processes (FEPs) are terms used in the fields of radioactive waste management, carbon capture and storage, and hydraulic fracturing to define relevant scenarios for safety assessment studies. For a radioactive waste repository, features would include the characteristics of the site, such as the type of soil or geological formation the repository is to be built on or under. Events would include things that may or will occur in the future, such as glaciations, droughts, earthquakes, or the formation of faults. Processes are things that are ongoing, such as the erosion or subsidence of the landform on or near which the site is located. Several catalogues of FEPs are publicly available, among them one elaborated for the NEA Clay Club dealing with the disposal of radioactive waste in deep clay formations, and those compiled for deep crystalline rocks (granite) by Svensk Kärnbränslehantering AB (SKB), the Swedish Nuclear Fuel and Waste Management Company.
https://en.wikipedia.org/wiki/Major%20basic%20protein
Eosinophil major basic protein, often shortened to major basic protein (MBP; also called proteoglycan 2, PRG2), is encoded in humans by the PRG2 gene. Function The protein encoded by this gene is the predominant constituent of the crystalline core of the eosinophil granule. High levels of the proform of this protein are also present in placenta and pregnancy serum, where it exists as a complex with several other proteins, including pregnancy-associated plasma protein A (PAPPA), angiotensinogen (AGT), and C3dg. This protein may be involved in antiparasitic defense mechanisms as a cytotoxin and helminthotoxin, and in immune hypersensitivity reactions. It is directly implicated in epithelial cell damage, exfoliation, and bronchospasm in allergic diseases. PRG2 is a 117-residue protein that predominates in eosinophil granules. It is a potent toxin against helminths and is toxic towards bacteria and mammalian cells in vitro. The eosinophil major basic protein also causes the release of histamine from mast cells and basophils, and activates neutrophils and alveolar macrophages. Structure Structurally, major basic protein (MBP) is similar to lectins (sugar-binding proteins), and has a fold similar to that seen in C-type lectins. However, unlike other C-type lectins (those that bind various carbohydrates in the presence of calcium), MBP binds neither calcium nor any of the carbohydrates that this family recognizes. Instead, MBP recognises heparan sulfate proteoglycans. Two crystallographic structures of MBP have been determined. Interactions Major basic protein has been shown to interact with pregnancy-associated plasma protein A. See also Arylsulfatase
https://en.wikipedia.org/wiki/Parasitic%20worm
Parasitic worms, also known as helminths, are large macroparasites; adults can generally be seen with the naked eye. Many are intestinal worms that are soil-transmitted and infect the gastrointestinal tract. Other parasitic worms such as schistosomes reside in blood vessels. Some parasitic worms, including leeches and monogeneans, are ectoparasites; thus, they are not classified as helminths, which are endoparasites. Parasitic worms live in and feed in living hosts. They receive nourishment and protection while disrupting their hosts' ability to absorb nutrients. This can cause weakness and disease in the host, and poses a global health and economic problem. Parasitic worms cannot reproduce entirely within their host's body; they have a life cycle that includes some stages that need to take place outside of the host. Helminths are able to survive in their mammalian hosts for many years due to their ability to manipulate the host's immune response by secreting immunomodulatory products. All parasitic worms produce eggs during reproduction. These eggs have a strong shell that protects them against a range of environmental conditions. The eggs can therefore survive in the environment for many months or years. Many of the worms referred to as helminths are intestinal parasites. An infection by a helminth is known as helminthiasis, helminth infection, or intestinal worm infection. There is a naming convention which applies to all helminths: the ending "-asis" (or in veterinary science: "-osis") is added at the end of the name of the worm to denote the infection with that particular worm. For example, Ascaris is the name of a type of helminth, and ascariasis is the name of the infection caused by that helminth. Taxonomy Helminths are a group of organisms which share a similar form but are not necessarily related as part of evolution. The term "helminth" is an artificial term. There is no real consensus on the taxonomy (or groupings) of the helminths, particularly wi
https://en.wikipedia.org/wiki/Osmotic%20concentration
Osmotic concentration, formerly known as osmolarity, is the measure of solute concentration, defined as the number of osmoles (Osm) of solute per litre (L) of solution (osmol/L or Osm/L). The osmolarity of a solution is usually expressed as Osm/L (pronounced "osmolar"), in the same way that the molarity of a solution is expressed as "M" (pronounced "molar"). Whereas molarity measures the number of moles of solute per unit volume of solution, osmolarity measures the number of osmoles of solute particles per unit volume of solution. This value allows the measurement of the osmotic pressure of a solution and the determination of how the solvent will diffuse across a semipermeable membrane (osmosis) separating two solutions of different osmotic concentration. Unit The unit of osmotic concentration is the osmole. This is a non-SI unit of measurement that defines the number of moles of solute that contribute to the osmotic pressure of a solution. A milliosmole (mOsm) is 1/1,000 of an osmole. A microosmole (μOsm) (also spelled micro-osmole) is 1/1,000,000 of an osmole. Types of solutes Osmolarity is distinct from molarity because it measures osmoles of solute particles rather than moles of solute. The distinction arises because some compounds can dissociate in solution, whereas others cannot. Ionic compounds, such as salts, can dissociate in solution into their constituent ions, so there is not a one-to-one relationship between the molarity and the osmolarity of a solution. For example, sodium chloride (NaCl) dissociates into Na+ and Cl− ions. Thus, for every 1 mole of NaCl in solution, there are 2 osmoles of solute particles (i.e., a 1 mol/L NaCl solution is a 2 osmol/L NaCl solution). Both sodium and chloride ions affect the osmotic pressure of the solution. Another example is magnesium chloride (MgCl2), which dissociates into Mg2+ and 2Cl− ions. For every 1 mole of MgCl2 in the solution, there are 3 osmoles of solute particles. Nonionic compounds do not dissociate
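Under the complete-dissociation assumption used in these examples, the conversion from molarity to osmolarity is a single multiplication; a trivial sketch:

```python
def osmolarity(molarity, particles_per_unit):
    """Osmoles per litre, assuming complete dissociation of the solute."""
    return molarity * particles_per_unit

print(osmolarity(1.0, 2))  # 1 mol/L NaCl    -> 2 osmol/L (Na+ and Cl-)
print(osmolarity(1.0, 3))  # 1 mol/L MgCl2   -> 3 osmol/L (Mg2+ and 2 Cl-)
print(osmolarity(1.0, 1))  # 1 mol/L glucose -> 1 osmol/L (no dissociation)
```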
https://en.wikipedia.org/wiki/Insilicos
Insilicos is a life science software company founded in 2002 by Erik Nilsson, Brian Pratt and Bryan Prazen. It develops scientific computing software for disease diagnosis. Technology Insilicos' key technologies include pattern recognition techniques to interpret proteomics mass spectrometry data. Insilicos products include InsilicosViewer and the Insilicos Proteomics Pipeline (IPP). These products support the mzXML, mzData and mzML file formats. In 2007, Insilicos received a grant from the National Human Genome Research Institute to further develop software allowing for studies to be conducted more quickly. The open-source software, developed in connection with the Institute for Systems Biology, has been referred to as the Trans-Proteomic Pipeline; IPP is a commercial version of the Trans-Proteomic Pipeline.
https://en.wikipedia.org/wiki/Brow%20ridge
The brow ridge, or supraorbital ridge, known as the superciliary arch in medicine, is a bony ridge located above the eye sockets of all primates and some other animals. In humans, the eyebrows are located on its lower margin. Structure The brow ridge is a nodule or crest of bone situated on the frontal bone of the skull. It forms the separation between the forehead portion itself (the squama frontalis) and the roof of the eye sockets (the pars orbitalis). Normally, in humans, the ridges arch over each eye, offering mechanical protection. In other primates, the ridge is usually continuous and often straight rather than arched. The ridges are separated from the frontal eminences by a shallow groove. The ridges are most prominent medially, and are joined to one another by a smooth elevation named the glabella. Typically, the arches are more prominent in men than in women, and vary between different ethnic groups. Behind the ridges, deeper in the bone, are the frontal sinuses. Terminology The brow ridges, being a prominent part of the face in some ethnic groups and a trait linked to both atavism and sexual dimorphism, have a number of names in different disciplines. In vernacular English, the terms eyebrow bone or eyebrow ridge are common. The more technical terms frontal or supraorbital arch, ridge or torus (plural tori, as the ridge is usually seen as a pair) are often found in anthropological or archaeological studies. In medicine, the Latin term arcus superciliaris, or its English translation superciliary arch, is used. This feature is distinct from the supraorbital margin and the margin of the orbit. Some paleoanthropologists distinguish between the frontal torus and the supraorbital ridge. In anatomy, a torus is a projecting shelf of bone that, unlike a ridge, is rectilinear, unbroken, and goes through the glabella. Some fossil hominins, in this use of the word, have a frontal torus, but almost all modern humans have only the ridge. Development Spatial model Th
https://en.wikipedia.org/wiki/Gap%20analysis
In management literature, gap analysis involves the comparison of actual performance with potential or desired performance. If an organization does not make the best use of current resources, or forgoes investment in capital or technology, it may produce or perform below an idealized potential. This concept is similar to an economy's production being below the production possibilities frontier. Gap analysis identifies gaps between the optimized allocation and integration of the inputs (resources), and the current allocation-level. This reveals areas that can be improved. Gap analysis involves determining, documenting and improving the difference between business requirements and current capabilities. Gap analysis naturally flows from benchmarking and from other assessments. Once the general expectation of performance in an industry is understood, it is possible to compare that expectation with the company's current level of performance. This comparison becomes the gap analysis. Such analysis can be performed at the strategic or at the operational level of an organization. Gap analysis is a formal study of what a business is doing currently and where it wants to go in the future. It can be conducted, in different perspectives, as follows: Organization (e.g., Human Resources) Business direction Business processes Information technology Gap analysis provides a foundation for measuring investment of time, money and human resources required to achieve a particular outcome (e.g. to turn the salary payment process from paper-based to paperless with the use of a system). Note that "GAP analysis" has also been used as a means of classifying how well a product or solution meets a targeted need or set of requirements. In this case, "GAP" can be used as a ranking of "Good", "Average" or "Poor". (This terminology appears in the PRINCE2 project management publication.) Gap analysis and new products The need for new products or additions to existing lines may emerge from
https://en.wikipedia.org/wiki/Signal%20velocity
The signal velocity is the speed at which a wave carries information. It describes how quickly a message can be communicated (using any particular method) between two separated parties. No signal velocity can exceed the speed of a light pulse in a vacuum (by Special Relativity). Signal velocity is usually equal to group velocity (the speed of a short "pulse" or of a wave-packet's middle or "envelope"). However, in a few special cases (e.g., media designed to amplify the front-most parts of a pulse and then attenuate the back section of the pulse), group velocity can exceed the speed of light in vacuum, while the signal velocity will still be less than or equal to the speed of light in vacuum. In electronic circuits, signal velocity is one member of a group of five closely related parameters. In these circuits, signals are usually treated as operating in TEM (Transverse ElectroMagnetic) mode. That is, the fields are perpendicular to the direction of transmission and perpendicular to each other. Given this presumption, the quantities: signal velocity, the product of dielectric constant and magnetic permeability, characteristic impedance, inductance of a structure, and capacitance of that structure, are all related such that if you know any two, you can calculate the rest. In a uniform medium if the permeability is constant, then variation of the signal velocity will be dependent only on variation of the dielectric constant. In a transmission line, signal velocity is the reciprocal of the square root of the capacitance-inductance product, where inductance and capacitance are typically expressed as per-unit length. In circuit boards made of FR-4 material, the signal velocity is typically about six inches (15 cm) per nanosecond, or 6.562 ps/mm. In circuit boards made of Polyimide material, the signal velocity is typically about 16.3 cm per nanosecond or 6.146 ps/mm. In these boards, permeability is usually constant and dielectric constant often varies from locati
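The relationships described above can be made concrete. The per-unit-length values below are illustrative choices (not from the article), picked so that the characteristic impedance is about 50 Ω and the delay matches the FR-4 figure quoted above; knowing L and C, the other quantities follow:

```python
import math

L = 328e-9   # inductance per metre (H/m), illustrative value
C = 131e-12  # capacitance per metre (F/m), illustrative value

v = 1 / math.sqrt(L * C)          # signal velocity, m/s
z0 = math.sqrt(L / C)             # characteristic impedance, ohms
delay_ps_per_mm = 1e12 / v / 1e3  # propagation delay, ps/mm

print(f"v  = {v:.3e} m/s (~{v * 1e-9 * 100:.1f} cm/ns)")
print(f"Z0 = {z0:.1f} ohm")
print(f"delay = {delay_ps_per_mm:.3f} ps/mm")  # ~6.56 ps/mm, as for FR-4
```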
https://en.wikipedia.org/wiki/GameStorm
GameStorm was an online gaming service founded by the Kesmai corporation in November 1997. It offered several online video games at a flat fee of $10 per month, a relatively radical payment system in the age of pay-by-the-hour online gaming. Both Kesmai and GameStorm were sold to Electronic Arts in 1999, and GameStorm was shut down by Electronic Arts in 2001. GameStorm featured games developed by Kesmai, such as Air Warrior, Multiplayer Battletech: Solaris, Stellar Emperor and Legends of Kesmai, along with games developed by several other companies. Legends of Kesmai was the 2D graphical version of Kesmai's groundbreaking Islands of Kesmai MUD from 1985, and may be regarded as an important step in the genre's leap from MUDs to MMORPGs. GameStorm's payment method was massively popular for the emerging persistent online gaming genres, which rewarded players for time invested but were too expensive for many people at $2/hour on AOL or other gaming services. Mythic Entertainment (now EA Mythic), widely known for the Dark Age of Camelot MMORPG, was one of GameStorm's major developers. Mythic offered several licensed RPG and persistent non-RPG games to GameStorm's library, including Dragon's Gate, Starship Troopers Online, Magestorm, Aliens Online, Splatterball, Godzilla Online, Silent Death Online, Darkness Falls, and Darkness Falls: The Crusade.
https://en.wikipedia.org/wiki/Richard%20Askey
Richard Allen Askey (4 June 1933 – 9 October 2019) was an American mathematician, known for his expertise in the area of special functions. The Askey–Wilson polynomials (introduced by him in 1984 together with James A. Wilson) are on the top level of the (q-)Askey scheme, which organizes orthogonal polynomials of (q-)hypergeometric type into a hierarchy. The Askey–Gasper inequality for Jacobi polynomials is essential in de Branges's famous proof of the Bieberbach conjecture. Askey earned a B.A. at Washington University in St. Louis in 1955, an M.A. at Harvard University in 1956, and a Ph.D. at Princeton University in 1961. After working as an instructor at Washington University (1958–1961) and the University of Chicago (1961–1963), he joined the faculty of the University of Wisconsin–Madison in 1963 as an Assistant Professor of Mathematics. He became a full professor at Wisconsin in 1968, and from 2003 was a professor emeritus. Askey was a Guggenheim Fellow in 1969–1970, an academic year he spent at the Mathematisch Centrum in Amsterdam. In 1983, he gave an invited lecture at the International Congress of Mathematicians (ICM) in Warsaw. He was elected a Fellow of the American Academy of Arts and Sciences in 1993. In 1999, he was elected to the National Academy of Sciences. In 2009, he became a fellow of the Society for Industrial and Applied Mathematics (SIAM). In 2012, he became a fellow of the American Mathematical Society. In December 2012, he received an honorary doctorate from SASTRA University in Kumbakonam, India. Askey explained why hypergeometric functions appear so frequently in mathematical applications: "Riemann showed that the requirement that a differential equation have regular singular points at three given points and every other complex point is a regular point is so strong a restriction that (Riemann's) differential equation is the hypergeometric equation with the three singularities moved to the three given points. Differential equations with four or
https://en.wikipedia.org/wiki/Isotonic%20regression
In statistics and numerical analysis, isotonic regression or monotonic regression is the technique of fitting a free-form line to a sequence of observations such that the fitted line is non-decreasing (or non-increasing) everywhere, and lies as close to the observations as possible. Applications Isotonic regression has applications in statistical inference. For example, one might use it to fit an isotonic curve to the means of some set of experimental results when an increase in those means according to some particular ordering is expected. A benefit of isotonic regression is that it is not constrained by any functional form, such as the linearity imposed by linear regression, as long as the function is monotonically increasing. Another application is nonmetric multidimensional scaling, where a low-dimensional embedding for data points is sought such that the order of distances between points in the embedding matches the order of dissimilarities between points. Isotonic regression is used iteratively to fit ideal distances to preserve relative dissimilarity order. Isotonic regression is also used in probabilistic classification to calibrate the predicted probabilities of supervised machine learning models. Isotonic regression for the simply ordered case with univariate x has been applied to estimating continuous dose-response relationships in fields such as anesthesiology and toxicology. Narrowly speaking, isotonic regression only provides point estimates at observed values of x. Estimation of the complete dose-response curve without any additional assumptions is usually done via linear interpolation between the point estimates. Software for computing isotone (monotonic) regression has been developed for R, Stata, and Python. Problem statement and algorithms Let (x_1, y_1), …, (x_n, y_n) be a given set of observations, where the y_i are real numbers and the x_i fall in some partially ordered set. For generality, each observation (x_i, y_i) may be given a weight w_i ≥ 0, although commonly w_i = 1 for all i. Isotonic regression seeks a weighted least-squares fit of values ŷ_i to the y_i, subject to the constraint that ŷ_i ≤ ŷ_j whenever x_i ≤ x_j.
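For the simply ordered, one-dimensional case, the classic fitting method is the pool adjacent violators algorithm (PAVA): scan the responses in order and, whenever a value breaks monotonicity, merge it with the preceding block, replacing both by their weighted mean. A compact sketch of this idea (our own illustration, independent of the R, Stata, and Python packages mentioned above):

```python
import numpy as np

def isotonic_regression(y, w=None):
    """Pool adjacent violators: non-decreasing least-squares fit to y."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    means, weights, counts = [], [], []   # one entry per merged block
    for yi, wi in zip(y, w):
        means.append(yi); weights.append(wi); counts.append(1)
        # Merge backwards while the monotonicity constraint is violated.
        while len(means) > 1 and means[-2] > means[-1]:
            m2, w2, c2 = means.pop(), weights.pop(), counts.pop()
            m1, w1, c1 = means.pop(), weights.pop(), counts.pop()
            means.append((w1 * m1 + w2 * m2) / (w1 + w2))
            weights.append(w1 + w2); counts.append(c1 + c2)
    # Expand each block's mean back to one fitted value per observation.
    return np.repeat(means, counts)

print(isotonic_regression([1, 3, 2, 4, 3, 5]))
# [1.  2.5 2.5 3.5 3.5 5. ]
```

In Python, for example, scikit-learn ships a ready-made IsotonicRegression estimator implementing the same kind of fit.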
https://en.wikipedia.org/wiki/Wool%20bale
A wool bale is a standard sized and weighted pack of classed wool compressed by the mechanical means of a wool press. This is the regulation required method of packaging for wool, to keep it uncontaminated and readily identifiable. A "bale of wool" is also the standard trading unit for wool on the wholesale national and international markets. The minimum weight of a bale is . Wool packs Packaging of wool has not changed much for centuries except that the early wool packs were made from jute, prior to the use of synthetic fibres. Jute packs were relatively heavy, weighing several kilograms each. In the 1960s polypropylene and high-density polyethylene packs were manufactured and used to make wool bales. Loose fibres from these packs caused contamination of the wool in the bale and led to nylon becoming the regulation fabric used in Australia. In South Africa woven paper was tested but discontinued in 1973 due to poor wet strength and high cost. Regulation standard white nylon packs now have a label sewn onto the top flap of the wool pack for inclusion of the farm brand, wool description, bale number, woolclasser stencil number and bin code. Each bale of wool packs contains 50 packs that measure x and have flaps. History Very early wool presses were made from wood boards and had a wire winch mechanism to compress the wool and also hollow logs where the wool was tramped into a pack. During the late 19th century various forms of wooden wool press became the standard. Most popular models were the Koerstz and the Ferrier. The Koerstz was a smaller press than the Ferrier. The Ferrier press was manufactured under license by Humble & Nicholson (later Humble & Sons), Geelong, Victoria, and they had sold 2,000 presses between about 1871 and 1918. These presses were distributed throughout Australia, but were also sent overseas to New Zealand, South America, and North Africa. The most popular wool press in New Zealand was the Donalds Wool Press which was manufactured
https://en.wikipedia.org/wiki/Invasive%20species%20of%20Australian%20origin
There are a number of Australian species that have become invasive when introduced into regions outside Australia or Oceania. Animals The Australian magpie was introduced into New Zealand and is considered a pest because of attacks on humans and a possible effect on the native bird population. The common brushtail possum was introduced to New Zealand to start a fur industry, and spread nationwide. With no natural controls, possums have a severe impact, feeding on native plant species and also preying on native animal species. Possums have also been observed eating the eggs of nesting birds such as the kererū, tūī, and kākā. They are a carrier of tuberculosis, which they spread to pasture and hence livestock. Controlling them has been an ongoing project of regional councils, the Department of Conservation, Forest and Bird, and various other wildlife preservation organizations. Plants Melaleucas in the Everglades Perhaps the best known example of an Australian plant becoming an invasive species is the problematic introduction of Melaleuca quinquenervia into Florida. As with all Melaleuca species, M. quinquenervia seeds prolifically. In the absence of natural predators, it spread throughout southern Florida; at one time it was estimated that it had colonised 12% of southern Florida. The colonised area included a substantial proportion of the Everglades, an important national park and World Heritage Site. Attempts were made to control the spread by burning, but this only exacerbated the problem, as it encouraged seed dispersal while failing to kill the trees. The spread of Melaleuca is now managed by a combination of regular herbicide treatment and the introduction of an Australian beetle as a biological control. Acacia in southern Africa A number of Acacia species have become serious environmental pests after being introduced into southern Africa. The most troublesome species are Acacia cyclops and Acacia saligna. Both are Western Australian coas
https://en.wikipedia.org/wiki/EIA-530
TIA-530-A, often called EIA-530 or RS-530, is a balanced serial interface standard that generally uses a 25-pin connector, originally created by the Telecommunications Industry Association. Finalized in 1987 (revision A finalized in 1992), the specification defines the cable between the DTE and DCE devices. It is to be used in conjunction with EIA-422 and EIA-423, which define the electrical signaling characteristics. Because TIA-530 calls for the more common 25-pin connector, it displaced the similar EIA-449, which also uses EIA-422/423 but a larger 37-pin connector. Two types of interchange circuits ("signals" or "leads") between the DCE and DTE are defined in TIA-530: Category I, which uses the balanced characteristics of EIA-422, and Category II, which uses the unbalanced EIA-423. Most of the interchange circuits are Category I, the exceptions being Local Loopback (pin 18), Remote Loopback (pin 21), and Test Mode (pin 25), which are Category II. TIA-530 originally used Category I circuits for what is commonly called "Data Set Ready" (DCE Ready, pins 6 and 22) and "Data Terminal Ready" (DTE Ready, pins 20 and 23). Revision A changed these interchange circuits to Category II (paragraphs 4.3.6 and 4.3.7 of the standard) and added a "Ring Indicator" on pin 22. Pin 23 is grounded in TIA-530-A. Confusion between the revisions has led to many incorrect wiring diagrams of this interface, and most manufacturers still adhere to the original TIA-530 standard. Care should be taken to ensure devices are of the same standard before connecting, to avoid complications.
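For quick reference, the pin assignments stated above can be collected into a small lookup table; the sketch below merely restates those circuits (it is not a complete 25-pin map of the standard):

```python
# Interchange circuits discussed above, TIA-530-A conventions.
CIRCUITS = {
    6:  ("DCE Ready (DSR)", "Category II in revision A"),
    18: ("Local Loopback", "Category II"),
    20: ("DTE Ready (DTR)", "Category II in revision A"),
    21: ("Remote Loopback", "Category II"),
    22: ("Ring Indicator, added in revision A", "Category II"),
    23: ("Grounded in TIA-530-A", "-"),
    25: ("Test Mode", "Category II"),
}

for pin, (name, category) in sorted(CIRCUITS.items()):
    print(f"pin {pin:2d}: {name} ({category})")
```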
https://en.wikipedia.org/wiki/Pittsburgh%20compound%20B
Pittsburgh compound B (PiB) is a radioactive analog of thioflavin T, which can be used in positron emission tomography scans to image beta-amyloid plaques in neuronal tissue. Due to this property, Pittsburgh compound B may be used in investigational studies of Alzheimer's disease. History The definitive diagnosis of Alzheimer's disease can only be made following the demonstration of the presence of beta-amyloid (Aβ) plaques and neurofibrillary tangles, the pathologic hallmarks of Alzheimer's disease in brain tissue, typically at autopsy. While the cognitive impairments of the disease could be monitored throughout the disease course, clinicians had no reliable way to monitor the pathologic progression of the disease. Due to this fact, a clear understanding of the process of amyloid deposition and how amyloid deposits relate to the cognitive symptoms of Alzheimer's disease remains to be elucidated. While sophisticated centers for the treatment of Alzheimer's disease are able to diagnose the disease with some reliability based on its clinical presentation, the differential diagnosis of Alzheimer's disease from other dementias is less robust. Furthermore, as novel disease-modifying therapies for Alzheimer's disease that attack and remove beta-amyloid deposits from the brain enter clinical trials, a pre-mortem tool for assessing their effectiveness at clearing the amyloid deposits was a much needed development. To answer these needs, a research team from the University of Pittsburgh led by geriatrics psychiatrist William E. Klunk and radiochemist Chester A. Mathis synthesised charge-neutral benzothiazoles derived from thioflavin T, which included a small number of compounds with suitable properties for use as a positron emission tomography imaging agent. One of these compounds, 2-(4'-[11C]methylaminophenyl)-6-hydroxybenzothiazole, was tested in human subjects. The University of Pittsburgh team partnered with a team of researchers from Uppsala University in Upps
https://en.wikipedia.org/wiki/Cassia%20gum
Cassia gum is the flour and food additives made from the endosperms of the seeds of Senna obtusifolia and Senna tora (also called Cassia obtusifolia or Cassia tora). It is composed of at least 75% polysaccharide, primarily galactomannan with a mannose:galactose ratio of 5:1, resulting in a high molecular mass of 200,000-300,000 Da. Approval Japan In 1995, cassia gum was added to the list of approved food additives in Japan by the Japanese Ministry of Health and Welfare. United States Two GRAS notices were filed to the U.S. Food and Drug Administration (FDA), one on June 23, 2000 (GRN 51) and one on November 21, 2003 (GRN 139), both of which were not evaluated due to notifier's request to cease evaluation. In June 2008, specialty firm Lubrizol Advanced Material filed a petition to the FDA proposing that food regulations be amended to provide for the use of cassia gum as a stabilizer in frozen dairy desserts. Approval in the US is still pending, with no clear indication of when it may be obtained. European Union In 2010, cassia gum received EU approval for human food applications. Uses It is used as a thickener and gelling agent, and has E-number E427 in food and E499 in feed (pet food).
https://en.wikipedia.org/wiki/Nastic%20movements
In biology, nastic movements are non-directional responses to stimuli (e.g. temperature, humidity, light irradiance), and are usually associated with plants. The movement can be due to changes in turgor (internal pressure within plant cells). A decrease in turgor pressure causes shrinkage, while an increase in turgor pressure brings about swelling. Nastic movements differ from tropic movements in that the direction of tropic responses depends on the direction of the stimulus, whereas the direction of nastic movements is independent of the stimulus's position. A tropic movement is a growth movement, but a nastic movement may or may not be a growth movement. The rate or frequency of these responses increases as the intensity of the stimulus increases. An example of such a response is the opening and closing of flowers (a photonastic response); by contrast, the swimming of Euglena or Chlamydomonas towards a light source is directed by the stimulus and is therefore a taxis rather than a nastic movement. Nastic movements are named with the suffix "-nasty" and have prefixes that depend on the stimuli: Epinasty: downward-bending from growth at the top, for example, the bending down of a heavy flower. Hyponasty: upward bending of leaves from growth in the petiole (leaf stalk) Photonasty: response to light Nyctinasty: movements at night or in the dark Chemonasty: response to chemicals or nutrients Hydronasty: response to water Thermonasty: response to temperature Seismonasty: response to shock Geonasty/gravinasty: response to gravity Thigmonasty/haptonasty: response to contact The suffix may come from Greek νάσσω = "I press", ναστός = "pressed", ἐπιναστια = "the condition of being pressed upon". See also For other types of movement, see: Taxis Tropism Kinesis
https://en.wikipedia.org/wiki/Cookies%20and%20cream
Cookies and cream (or cookies 'n cream) is a variety of ice cream, milkshake and other desserts that includes chocolate sandwich cookies, with the most popular version containing hand-crumbled or pre-crumbled cookies from Nabisco's Oreo brand under a licensing agreement. Cookies and cream ice cream generally mixes crumbled chocolate sandwich cookies into vanilla ice cream, though variations exist which might instead use chocolate, coffee or mint ice cream. History There are competing claims as to who first invented and marketed cookies and cream ice cream. Malcolm Stogo, an ice cream consultant, claimed to have created the flavor in 1976, 1977 or 1978. South Dakota State University claims the flavor was invented at the university's dairy plant in 1979 by plant manager Shirley Seas and students Joe Leedom and Joe Van Treeck. In a 2005 press release, Blue Bell Creameries claimed they were the first company to mass-produce the flavor, in 1980. In 2006, The New York Times reported that Blue Bell made "no claim to have invented it but certainly pioneered the flavor." However, as of 2020, the company's website proclaimed, "We were first to create this innovative flavor." Blue Bell Creameries applied to register the trademark "Cookies 'n Cream" in 1981. John Harrison, the official taster for Dreyer's/Edy's Ice Cream, claims he invented it for the company in 1982. Another claimant is Steve Herrell of Massachusetts' Herrell's Ice Cream. In 1983, cookies and cream became one of the top five best-selling flavors of ice cream. See also Hershey's Cookies 'n' Creme
https://en.wikipedia.org/wiki/XMLVend
XMLVend is a South African-developed, open interface standard which facilitates the sale of prepaid electricity credit between electricity utilities and clients. It is an application of web services to facilitate trade between various types of devices and a utility prepayment vending server. This standard is already being introduced and used in prepaid water.
https://en.wikipedia.org/wiki/Cauchy%27s%20theorem%20%28group%20theory%29
In mathematics, specifically group theory, Cauchy's theorem states that if G is a finite group and p is a prime number dividing the order of G (the number of elements in G), then G contains an element of order p. That is, there is x in G such that p is the smallest positive integer with x^p = e, where e is the identity element of G. It is named after Augustin-Louis Cauchy, who discovered it in 1845. The theorem is related to Lagrange's theorem, which states that the order of any subgroup of a finite group G divides the order of G. Cauchy's theorem implies that for any prime divisor p of the order of G, there is a subgroup of G whose order is p—the cyclic group generated by the element in Cauchy's theorem. Cauchy's theorem is generalized by Sylow's first theorem, which implies that if p^n is the maximal power of p dividing the order of G, then G has a subgroup of order p^n (and using the fact that a p-group is solvable, one can show that G has subgroups of order p^i for any i less than or equal to n). Statement and proof Many texts prove the theorem with the use of strong induction and the class equation, though considerably less machinery is required to prove the theorem in the abelian case. One can also invoke group actions for the proof. Proof 1 We first prove the special case where G is abelian, and then the general case; both proofs are by induction on n = |G|, and have as starting case n = p, which is trivial because any non-identity element now has order p. Suppose first that G is abelian. Take any non-identity element a, and let H be the cyclic group it generates. If p divides |H|, then a^(|H|/p) is an element of order p. If p does not divide |H|, then it divides the order [G:H] of the quotient group G/H, which therefore contains an element of order p by the inductive hypothesis. That element is a class xH for some x in G, and if m is the order of x in G, then x^m = e in G gives (xH)^m = H in G/H, so p divides m; as before, x^(m/p) is now an element of order p in G, completing the proof for the abelian case. In the general case, let Z be the center of G, which is an abelian subgroup.
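The theorem is easy to verify exhaustively on a small group. The sketch below represents S3 (of order 6 = 2 · 3) as permutation tuples and finds an element of order p for each prime p dividing the group order:

```python
from itertools import permutations

# The symmetric group S3: permutations of {0, 1, 2}, composed as functions.
G = list(permutations(range(3)))
identity = tuple(range(3))

def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

def order(g):
    """Smallest n >= 1 with g^n equal to the identity."""
    n, h = 1, g
    while h != identity:
        h = compose(h, g)
        n += 1
    return n

for p in (2, 3):  # the primes dividing |S3| = 6
    witness = next(g for g in G if order(g) == p)
    print(f"element of order {p}:", witness)
```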
https://en.wikipedia.org/wiki/Calculus%20on%20Manifolds%20%28book%29
Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus (1965) by Michael Spivak is a brief, rigorous, and modern textbook of multivariable calculus, differential forms, and integration on manifolds for advanced undergraduates. Description Calculus on Manifolds is a brief monograph on the theory of vector-valued functions of several real variables (f : Rn→Rm) and differentiable manifolds in Euclidean space. In addition to extending the concepts of differentiation (including the inverse and implicit function theorems) and Riemann integration (including Fubini's theorem) to functions of several variables, the book treats the classical theorems of vector calculus, including those of Cauchy–Green, Ostrogradsky–Gauss (divergence theorem), and Kelvin–Stokes, in the language of differential forms on differentiable manifolds embedded in Euclidean space, and as corollaries of the generalized Stokes theorem on manifolds-with-boundary. The book culminates with the statement and proof of this vast and abstract modern generalization of several classical results: if M is a compact oriented k-dimensional manifold-with-boundary and ω is a (k−1)-form on M, then ∫_M dω = ∫_{∂M} ω. The cover of Calculus on Manifolds features snippets of a July 2, 1850 letter from Lord Kelvin to Sir George Stokes containing the first disclosure of the classical Stokes' theorem (i.e., the Kelvin–Stokes theorem). Reception Calculus on Manifolds aims to present the topics of multivariable and vector calculus in the manner in which they are seen by a modern working mathematician, yet simply and selectively enough to be understood by undergraduate students whose previous coursework in mathematics comprises only one-variable calculus and introductory linear algebra. While Spivak's elementary treatment of modern mathematical tools is broadly successful—and this approach has made Calculus on Manifolds a standard introduction to the rigorous theory of multivariable calculus—the text is also well known for its laconic style, lack of motivating examples, and frequent omission of non-obvious ste
https://en.wikipedia.org/wiki/Megadiverse%20countries
A megadiverse country is one of a group of nations that harbours the majority of Earth's species and high numbers of endemic species. Conservation International identified 17 megadiverse countries in 1998. Many of them are located at least partially in tropical or subtropical regions. Megadiversity means exhibiting great biodiversity. The main criterion for megadiverse countries is endemism at the level of species, genera and families. A megadiverse country must have at least 5,000 species of endemic plants and must border marine ecosystems. In 2002, Mexico formed a separate organization focusing on Like-Minded Megadiverse Countries, consisting of countries rich in biological diversity and associated traditional knowledge. This organization includes all but three megadiverse countries as identified by Conservation International. List of megadiverse countries In alphabetical order, the 17 megadiverse countries are: Australia, Brazil, China, Colombia, the Democratic Republic of the Congo, Ecuador, India, Indonesia, Madagascar, Malaysia, Mexico, Papua New Guinea, Peru, the Philippines, South Africa, the United States, and Venezuela. List of most biodiverse countries 2022 Cancún initiative and declaration of like-minded megadiverse countries On 18 February 2002, the Ministers in charge of the Environment and the Delegates of Brazil, China, Colombia, Costa Rica, India, Indonesia, Kenya, Mexico, Peru, the Philippines, South Africa and Venezuela assembled in Cancún, Mexico. These countries declared the establishment of a Group of Like-Minded Megadiverse Countries as a mechanism for consultation and cooperation so that their interests and priorities, related to the preservation and sustainable use of biological diversity, could be promoted. They also declared that they would call on those countries that had not become Parties to the Convention on Biological Diversity, the Cartagena Protocol on Biosafety, and the Kyoto Protocol on climate change to become parties to these agreements. At the same time, they agreed to meet periodically, at the ministerial and expert levels, and decided that upon the conclusion of each annual Ministerial Meeting, the next rotating host country would take on the r
https://en.wikipedia.org/wiki/Flora%20%28microbiology%29
In microbiology, the collective bacteria and other microorganisms in a host are historically known as flora. Although microflora is commonly used, the term microbiota is becoming more common, as microflora is a misnomer: flora pertains to the kingdom Plantae, while microbiota includes Archaea, Bacteria, Fungi and protists. Microbiota with animal-like characteristics can be classified as microfauna. History The terms "flora" and "fauna" were first used by Carl Linnaeus of Sweden in the titles of his 1745 works Flora Suecica and Fauna Suecica. At that time, biology was focused on macroorganisms. Later, with the advent of microscopy, the newly discovered, ubiquitous microorganisms were fitted into this system: fauna included moving organisms (animals, plus protists as "microfauna"), and flora the organisms with no apparent movement (plants and fungi, with bacteria as "microflora"). The terms "microfauna" and "microflora" are common in old books, but recently they have been replaced by the more adequate term "microbiota". Microbiota includes Archaea, Bacteria, Fungi and protists. Microflora classification Microflora are grouped into two categories based on the origin of the microorganism: Autochthonous flora - bacteria and microorganisms native to the host environment. Allochthonous flora - temporary microorganisms non-native to the host environment. Roles Microflora is a term that refers to a community of bacteria that exist on or inside the body and possess a unique ecological relationship with the host. This relationship encompasses a wide variety of microorganisms and the interactions between microbes. These interactions are often mutualistic relationships between the host and autochthonous flora. Microflora responsible for harmful diseases are often allochthonous flora. The modern term is "microbiome", and it includes microorganisms that have different roles in ecosystems or hosts, including free-living organisms and organisms associated with hosts, such as animals (including humans)
https://en.wikipedia.org/wiki/MTOR
The mammalian target of rapamycin (mTOR), also referred to as the mechanistic target of rapamycin, and sometimes called FK506-binding protein 12-rapamycin-associated protein 1 (FRAP1), is a kinase that in humans is encoded by the MTOR gene. mTOR is a member of the phosphatidylinositol 3-kinase-related kinase family of protein kinases. mTOR links with other proteins and serves as a core component of two distinct protein complexes, mTOR complex 1 and mTOR complex 2, which regulate different cellular processes. In particular, as a core component of both complexes, mTOR functions as a serine/threonine protein kinase that regulates cell growth, cell proliferation, cell motility, cell survival, protein synthesis, autophagy, and transcription. As a core component of mTORC2, mTOR also functions as a tyrosine protein kinase that promotes the activation of insulin receptors and insulin-like growth factor 1 receptors. mTORC2 has also been implicated in the control and maintenance of the actin cytoskeleton. Discovery Rapa Nui (Easter Island - Chile) The study of TOR originated in the 1960s with an expedition to Easter Island (known by the island inhabitants as Rapa Nui), with the goal of identifying natural products from plants and soil with possible therapeutic potential. In 1972, Suren Sehgal identified a small molecule, from a soil bacterium Streptomyces hygroscopicus, that he purified and initially reported to possess potent antifungal activity. He appropriately named it rapamycin, noting its original source and activity (Sehgal et al., 1975). However, early testing revealed that rapamycin also had potent immunosuppressive and cytostatic anti-cancer activity. Rapamycin did not initially receive significant interest from the pharmaceutical industry until the 1980s, when Wyeth-Ayerst supported Sehgal's efforts to further investigate rapamycin's effect on the immune system. This eventually led to its FDA approval as an immunosuppressant following kidney transplantation. H
https://en.wikipedia.org/wiki/Cassie%27s%20law
Cassie's law, or the Cassie equation, describes the effective contact angle θc for a liquid on a chemically heterogeneous surface, i.e. the surface of a composite material consisting of different chemistries that is non-uniform throughout. Contact angles are important as they quantify a surface's wettability, the nature of solid-fluid intermolecular interactions. Cassie's law is reserved for when a liquid completely covers both smooth and rough heterogeneous surfaces. More of a rule than a law, the formula found in the literature for two materials is cos θc = f1 cos θ1 + f2 cos θ2, where θ1 and θ2 are the contact angles for component 1, with fractional surface area f1, and component 2, with fractional surface area f2, in the composite material respectively. If there exist more than two materials, then the equation is scaled to the general form cos θc = Σi fi cos θi, with Σi fi = 1. Cassie-Baxter Cassie's law takes on special meaning when the heterogeneous surface is a porous medium. f1 now represents the solid surface area and f2 the air gaps, such that the surface is no longer completely wet. Air creates a contact angle of θ2 = 180°, and because cos(180°) = −1 and f2 = 1 − f1, the equation reduces to cos θCB = f1(cos θ1 + 1) − 1, which is the Cassie-Baxter equation. Unfortunately the terms Cassie and Cassie-Baxter are often used interchangeably, but they should not be confused. The Cassie-Baxter equation is more common in nature and focuses on the incomplete coating of surfaces by a liquid only. In the Cassie-Baxter state, liquids sit upon asperities, resulting in air pockets bounded between the surface and liquid. Homogeneous surfaces The Cassie-Baxter equation is not restricted to chemically heterogeneous surfaces, as air within porous homogeneous surfaces will make the system heterogeneous. However, if the liquid penetrates the grooves, the surface returns to homogeneity and neither of the previous equations can be used. In this case the liquid is in the Wenzel state, governed by a separate equation. Transitions between the Cassie-Baxter state and the Wenzel state can take place when exter
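As a minimal illustration of the two formulas above (our sketch, not from the article; the example angle and area fractions are made up), the following C function evaluates the general Cassie equation, and the Cassie-Baxter case falls out as the two-component form with one component being air at 180°:

#include <stdio.h>
#include <math.h>

/* Effective contact angle (degrees) from Cassie's law:
   cos(theta_c) = sum_i f_i * cos(theta_i), with sum_i f_i = 1. */
double cassie_angle(const double *theta_deg, const double *f, int n) {
    const double PI = acos(-1.0);
    double c = 0.0;
    for (int i = 0; i < n; i++)
        c += f[i] * cos(theta_deg[i] * PI / 180.0);
    return acos(c) * 180.0 / PI;
}

int main(void) {
    /* Cassie-Baxter case: a solid with intrinsic angle 110 deg covering
       60% of the surface; the remaining 40% is air (contact angle 180 deg). */
    double theta[] = {110.0, 180.0};
    double f[]     = {0.6, 0.4};
    printf("effective contact angle: %.1f deg\n", cassie_angle(theta, f, 2));
    return 0;
}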
https://en.wikipedia.org/wiki/Output%20compare
Output compare is the ability to trigger an output based on a timestamp in memory, without interrupting the execution of code by a processor or microcontroller. This is a functionality provided by many embedded systems. The corresponding ability to record a timestamp in memory when an input occurs is called input capture. Embedded systems Microchip Documentation on Output Compare: DS39706A-page 16-1 - Section 16. Output Compare http://ww1.microchip.com/downloads/en/DeviceDoc/39706a.pdf
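To make the idea concrete, here is a hedged C sketch; the memory-mapped register names and addresses below are purely hypothetical placeholders (no specific microcontroller's peripheral is implied, including the Microchip part referenced above). It only illustrates the mechanism described: software writes a future timestamp into a compare register, and the timer hardware drives the pin when its free-running counter matches, with no further CPU involvement.

#include <stdint.h>

/* Hypothetical memory-mapped timer registers -- placeholders, not a real device. */
#define TIMER_COUNT   (*(volatile uint32_t *)0x40000000) /* free-running counter */
#define TIMER_COMPARE (*(volatile uint32_t *)0x40000004) /* match value          */
#define TIMER_CTRL    (*(volatile uint32_t *)0x40000008) /* control register     */
#define CTRL_OC_TOGGLE 0x1u /* hypothetical "toggle output pin on compare match" bit */

/* Schedule the output pin to toggle 'delay_ticks' from now.
   After this call the CPU is free to execute other code; the
   timer hardware performs the output action on its own. */
static void schedule_output_toggle(uint32_t delay_ticks) {
    TIMER_COMPARE = TIMER_COUNT + delay_ticks; /* timestamp in the future */
    TIMER_CTRL   |= CTRL_OC_TOGGLE;            /* arm output-compare mode */
}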
https://en.wikipedia.org/wiki/Generic%20Array%20Logic
The Generic Array Logic (also known as GAL and sometimes as gate array logic) device was an innovation of the PAL and was invented by Lattice Semiconductor. The GAL was an improvement on the PAL because one device type was able to take the place of many PAL device types or could even have functionality not covered by the original range of PAL devices. Its primary benefit, however, was that it was erasable and re-programmable, making prototyping and design changes easier for engineers. A similar device called a PEEL (programmable electrically erasable logic) was introduced by the International CMOS Technology (ICT) corporation. See also Programmable logic device (PLD) Complex programmable logic device (CPLD) Erasable programmable logic device (EPLD) GAL22V10
https://en.wikipedia.org/wiki/Television%20station
A television station is a set of equipment managed by a business, organisation or other entity such as an amateur television (ATV) operator, that transmits video content and audio content via radio waves directly from a transmitter on the Earth's surface to any number of tuned receivers simultaneously. Overview Most often the term "television station" refers to a station which broadcasts structured content to an audience, or it refers to the organization that operates the station. A terrestrial television transmission can occur via analog television signals or, more recently, via digital television signals. Television stations are differentiated from cable television or other video providers in that their content is broadcast via terrestrial radio waves. A group of television stations with common ownership or affiliation is known as a TV network, and an individual station within the network is referred to as an O&O or an affiliate, respectively. Because television station signals use the electromagnetic spectrum, which in the past has been a common, scarce resource, governments often claim authority to regulate them. Broadcast television systems standards vary around the world. Television stations broadcasting over an analog system were typically limited to one television channel, but digital television enables broadcasting via subchannels as well. Television stations usually require a broadcast license from a government agency which sets the requirements and limitations on the station. In the United States, for example, a television license defines the broadcast range, or geographic area, that the station is limited to, allocates the broadcast frequency of the radio spectrum for that station's transmissions, sets limits on what types of television programs can be programmed for broadcast, and requires a station to broadcast a minimum amount of certain program types, such as public affairs messages. Another form of television station is non-commercial educational (NCE) and
https://en.wikipedia.org/wiki/Adaptive%20software%20development
Adaptive software development (ASD) is a software development process that grew out of the work by Jim Highsmith and Sam Bayer on rapid application development (RAD). It embodies the principle that continuous adaptation of the process to the work at hand is the normal state of affairs. Adaptive software development replaces the traditional waterfall cycle with a repeating series of speculate, collaborate, and learn cycles. This dynamic cycle provides for continuous learning and adaptation to the emergent state of the project. The characteristics of an ASD life cycle are that it is mission focused, feature based, iterative, timeboxed, risk driven, and change tolerant. As with RAD, ASD is also an antecedent to agile software development. The word speculate refers to the paradox of planning – it is more likely to assume that all stakeholders are comparably wrong for certain aspects of the project’s mission, while trying to define it. During speculation, the project is initiated and adaptive cycle planning is conducted. Adaptive cycle planning uses project initiation information—the customer’s mission statement, project constraints (e.g., delivery dates or user descriptions), and basic requirements—to define the set of release cycles (software increments) that will be required for the project. Collaboration refers to the efforts for balancing the work based on predictable parts of the environment (planning and guiding them) and adapting to the uncertain surrounding mix of changes caused by various factors, such as technology, requirements, stakeholders, software vendors. The learning cycles, challenging all stakeholders, are based on the short iterations with design, build and testing. During these iterations the knowledge is gathered by making small mistakes based on false assumptions and correcting those mistakes, thus leading to greater experience and eventually mastery in the problem domain.
https://en.wikipedia.org/wiki/Locus%20%28genetics%29
In genetics, a locus (plural: loci) is a specific, fixed position on a chromosome where a particular gene or genetic marker is located. Each chromosome carries many genes, with each gene occupying a different position or locus; in humans, the total number of protein-coding genes in a complete haploid set of 23 chromosomes is estimated at 19,000–20,000. Genes may possess multiple variants known as alleles, and an allele may also be said to reside at a particular locus. Diploid and polyploid cells whose chromosomes have the same allele at a given locus are called homozygous with respect to that locus, while those that have different alleles at a given locus are called heterozygous. The ordered list of loci known for a particular genome is called a gene map. Gene mapping is the process of determining the specific locus or loci responsible for producing a particular phenotype or biological trait. Association mapping, also known as "linkage disequilibrium mapping", is a method of mapping quantitative trait loci (QTLs) that takes advantage of historic linkage disequilibrium to link phenotypes (observable characteristics) to genotypes (the genetic constitution of organisms), uncovering genetic associations. Nomenclature The shorter arm of a chromosome is termed the p arm or p-arm, while the longer arm is the q arm or q-arm. The chromosomal locus of a typical gene, for example, might be written 3p22.1, where: 3 = chromosome 3 p = p-arm 22 = region 2, band 2 (read as "two, two", not "twenty-two") 1 = sub-band 1 Thus the entire locus of the example above would be read as "three P two two point one". The cytogenetic bands are areas of the chromosome either rich in actively-transcribed DNA (euchromatin) or packaged DNA (heterochromatin). They appear differently upon staining (for example, euchromatin appears white and heterochromatin appears black on Giemsa staining). They are counted from the centromere out toward the telomeres. A range of loci is specified in a similar wa
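As a small illustration of this nomenclature (a sketch of ours; the parsing is simplified and only covers the well-formed "chromosome + arm + band" form described above), the following C snippet splits a locus string such as "3p22.1" into its parts:

#include <stdio.h>
#include <string.h>

/* Split a cytogenetic locus like "3p22.1" into chromosome, arm, and band.
   Simplified: assumes the "<chromosome><p|q><band[.sub-band]>" form. */
static int parse_locus(const char *locus, char *chrom, char *arm, char *band) {
    size_t i = 0, j = 0;
    while (locus[i] && locus[i] != 'p' && locus[i] != 'q')
        chrom[j++] = locus[i++];      /* chromosome: "3", "21", "X", ... */
    chrom[j] = '\0';
    if (locus[i] != 'p' && locus[i] != 'q')
        return -1;                    /* no arm designator found */
    *arm = locus[i++];                /* 'p' = short arm, 'q' = long arm */
    strcpy(band, locus + i);          /* region/band/sub-band, e.g. "22.1" */
    return 0;
}

int main(void) {
    char chrom[8], arm, band[16];
    if (parse_locus("3p22.1", chrom, &arm, band) == 0)
        printf("chromosome %s, %c arm, band %s\n", chrom, arm, band);
    return 0;
}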
https://en.wikipedia.org/wiki/Computer-assisted%20proof
A computer-assisted proof is a mathematical proof that has been at least partially generated by computer. Most computer-aided proofs to date have been implementations of large proofs-by-exhaustion of a mathematical theorem. The idea is to use a computer program to perform lengthy computations, and to provide a proof that the result of these computations implies the given theorem. In 1976, the four color theorem was the first major theorem to be verified using a computer program. Attempts have also been made in the area of artificial intelligence research to create smaller, explicit, new proofs of mathematical theorems from the bottom up using automated reasoning techniques such as heuristic search. Such automated theorem provers have proved a number of new results and found new proofs for known theorems. Additionally, interactive proof assistants allow mathematicians to develop human-readable proofs which are nonetheless formally verified for correctness. Since these proofs are generally human-surveyable (albeit with difficulty, as with the proof of the Robbins conjecture) they do not share the controversial implications of computer-aided proofs-by-exhaustion. Methods One method for using computers in mathematical proofs is by means of so-called validated numerics or rigorous numerics. This means computing numerically yet with mathematical rigour. One uses set-valued arithmetic in order to ensure that the set-valued output of a numerical program encloses the solution of the original mathematical problem. This is done by controlling, enclosing and propagating round-off and truncation errors using, for example, interval arithmetic. More precisely, one reduces the computation to a sequence of elementary operations, say addition, subtraction, multiplication and division. In a computer, the result of each elementary operation is rounded off by the computer precision. However, one can construct an interval provided by upper and lower bounds on the result of an elementary operation. Then one proceeds by replacin
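A minimal C sketch of the interval idea described above, using C99's fenv.h rounding control (real validated-numerics libraries are considerably more careful about rounding modes, overflow, and NaNs):

#include <stdio.h>
#include <fenv.h>
#pragma STDC FENV_ACCESS ON

typedef struct { double lo, hi; } interval;

/* Enclose the sum of two intervals: round the lower bound down and the
   upper bound up, so the exact sum of any points inside the inputs is
   guaranteed to lie inside the output. */
static interval int_add(interval a, interval b) {
    interval r;
    fesetround(FE_DOWNWARD);
    r.lo = a.lo + b.lo;
    fesetround(FE_UPWARD);
    r.hi = a.hi + b.hi;
    fesetround(FE_TONEAREST); /* restore the default rounding mode */
    return r;
}

int main(void) {
    interval x = {0.1, 0.1}, y = {0.2, 0.2};
    interval s = int_add(x, y);
    /* The printed interval rigorously encloses the exact sum of the
       stored endpoints, despite double-precision round-off. */
    printf("[%.17g, %.17g]\n", s.lo, s.hi);
    return 0;
}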
https://en.wikipedia.org/wiki/Object%20pool%20pattern
The object pool pattern is a software creational design pattern that uses a set of initialized objects kept ready to use – a "pool" – rather than allocating and destroying them on demand. A client of the pool will request an object from the pool and perform operations on the returned object. When the client has finished, it returns the object to the pool rather than destroying it; this can be done manually or automatically. Object pools are primarily used for performance: in some circumstances, object pools significantly improve performance. Object pools complicate object lifetime, as objects obtained from and returned to a pool are not actually created or destroyed at this time, and thus require care in implementation. Description When it is necessary to work with numerous objects that are particularly expensive to instantiate and each object is only needed for a short period of time, the performance of an entire application may be adversely affected. An object pool design pattern may be deemed desirable in cases such as these. The object pool design pattern creates a set of objects that may be reused. When a new object is needed, it is requested from the pool. If a previously prepared object is available, it is returned immediately, avoiding the instantiation cost. If no objects are present in the pool, a new item is created and returned. When the object has been used and is no longer needed, it is returned to the pool, allowing it to be used again in the future without repeating the computationally expensive instantiation process. It is important to note that once an object has been used and returned, existing references will become invalid. In some object pools the resources are limited, so a maximum number of objects is specified. If this number is reached and a new item is requested, an exception may be thrown, or the thread will be blocked until an object is released back into the pool. The object pool design pattern is used in several places in the sta
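A minimal fixed-size pool in C, sketching the acquire/release cycle described above (ours, not a production implementation: it is not thread-safe, and it simply returns NULL when the pool is exhausted, one of the policies mentioned in the article):

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

#define POOL_SIZE 4

typedef struct { int id; /* ... stand-in for expensive-to-initialize state ... */ } object;

static object pool[POOL_SIZE];
static bool in_use[POOL_SIZE];

/* Acquire: hand out a pre-initialized object, or NULL if none is free.
   (A real pool might instead grow, block the thread, or raise an error.) */
static object *pool_acquire(void) {
    for (int i = 0; i < POOL_SIZE; i++)
        if (!in_use[i]) { in_use[i] = true; return &pool[i]; }
    return NULL;
}

/* Release: return the object for reuse instead of destroying it. */
static void pool_release(object *o) {
    in_use[o - pool] = false;
}

int main(void) {
    for (int i = 0; i < POOL_SIZE; i++) pool[i].id = i; /* one-time setup cost */
    object *a = pool_acquire();
    if (a) printf("got object %d\n", a->id);
    pool_release(a); /* back to the pool; a later acquire may reuse it */
    return 0;
}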
https://en.wikipedia.org/wiki/Miredo
Miredo is a Teredo tunneling client designed to allow full IPv6 connectivity to computer systems which are on the IPv4-based Internet but which have no direct native connection to an IPv6 network. Miredo is included in many Linux and BSD distributions and is also available for recent versions of Mac OS X (discontinued). It includes working implementations of: a Teredo client a Teredo relay a Teredo server Released under the terms of the GNU General Public License, Miredo is free software. See also
https://en.wikipedia.org/wiki/Certified%20Professional%20Electrologist
The Certified Professional Electrologist (CPE) credential signifies that an electrologist's knowledge has been tested and measured against a national standard of excellence. The credential was developed and is administered through the American Electrology Association's International Board of Electrologist Certification (IBEC). A CPE must obtain seventy-five hours of continuing education in a five-year period to maintain this voluntary credential, or be re-tested. External links Why Choose A Certified Professional Electrologist
https://en.wikipedia.org/wiki/Holomorph%20%28mathematics%29
In mathematics, especially in the area of algebra known as group theory, the holomorph of a group is a group that simultaneously contains (copies of) the group and its automorphism group. The holomorph provides interesting examples of groups, and allows one to treat group elements and group automorphisms in a uniform context. In group theory, for a group G, the holomorph of G, denoted Hol(G), can be described as a semidirect product or as a permutation group. Hol(G) as a semidirect product If Aut(G) is the automorphism group of G then Hol(G) = G ⋊ Aut(G), where the multiplication is given by (g, α)(h, β) = (g·α(h), αβ). [Eq. 1] Typically, a semidirect product is given in the form N ⋊_φ H, where N and H are groups, φ : H → Aut(N) is a homomorphism, and the multiplication of elements in the semidirect product is given as (n, h)(n′, h′) = (n·φ(h)(n′), h·h′), which is well defined, since φ(h)(n′) ∈ N and therefore n·φ(h)(n′) ∈ N. For the holomorph, N = G, H = Aut(G), and φ is the identity map; as such we suppress writing φ explicitly in the multiplication given in [Eq. 1] above. For example, consider the cyclic group of order 3, C3 = {1, x, x^2}, whose automorphism group {1, σ} is generated by σ(x) = x^2. Then Hol(C3) consists of the pairs (x^i, σ^j) with the multiplication given by (x^{i1}, σ^{j1})(x^{i2}, σ^{j2}) = (x^{i1 + i2·2^{j1}}, σ^{j1 + j2}), where the exponents of x are taken mod 3 and those of σ mod 2. Observe, for example, (x, σ)(x^2, 1) = (x·σ(x^2), σ) = (x^2, σ), while (x^2, 1)(x, σ) = (x^2·x, σ) = (1, σ), and this group is not abelian, as (x, σ)(x^2, 1) ≠ (x^2, 1)(x, σ), so that Hol(C3) is a non-abelian group of order 6, which, by basic group theory, must be isomorphic to the symmetric group S3. Hol(G) as a permutation group A group G acts naturally on itself by left and right multiplication, each giving rise to a homomorphism from G into the symmetric group on the underlying set of G. One homomorphism is defined as λ : G → Sym(G), λ(g)(h) = g·h. That is, g is mapped to the permutation obtained by left-multiplying each element of G by g. Similarly, a second homomorphism ρ : G → Sym(G) is defined by ρ(g)(h) = h·g^{-1}, where the inverse ensures that ρ(g·h)(k) = ρ(g)(ρ(h)(k)). These homomorphisms are called the left and right regular representations of G. Each homomorphism is injective, a fact referred to as Cayley's theorem. For example, if G = C3 = {1, x, x^2} is a cyclic group of order three, then λ(x)(1) = x·1 = x, λ(x)(x) = x·x = x^2, and λ(x)(x^2) = x·x^2 = 1
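To make the C3 example above concrete, here is a short C sketch (ours, not from the article) that represents elements of Hol(C3) as pairs (i, j) standing for (x^i, σ^j) and prints the full 6 × 6 multiplication table; scanning the table confirms the group is non-abelian:

#include <stdio.h>

/* An element (x^i, sigma^j) of Hol(C3), with i taken mod 3 and j mod 2. */
typedef struct { int i, j; } elt;

/* (x^{i1}, s^{j1})(x^{i2}, s^{j2}) = (x^{i1 + i2*2^{j1}}, s^{j1+j2}) */
static elt mul(elt a, elt b) {
    int twist = (a.j == 0) ? 1 : 2;   /* sigma(x) = x^2, so sigma^j scales exponents by 2^j */
    elt r = { (a.i + b.i * twist) % 3, (a.j + b.j) % 2 };
    return r;
}

int main(void) {
    for (int a = 0; a < 6; a++) {
        for (int b = 0; b < 6; b++) {
            elt x = { a % 3, a / 3 }, y = { b % 3, b / 3 };
            elt p = mul(x, y);
            printf("(%d,%d) ", p.i, p.j);   /* row element times column element */
        }
        printf("\n");
    }
    return 0;
}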
https://en.wikipedia.org/wiki/Automatic%20vectorization
Automatic vectorization, in parallel computing, is a special case of automatic parallelization, where a computer program is converted from a scalar implementation, which processes a single pair of operands at a time, to a vector implementation, which processes one operation on multiple pairs of operands at once. For example, modern conventional computers, including specialized supercomputers, typically have vector operations that simultaneously perform operations such as the following four additions (via SIMD or SPMD hardware): However, in most programming languages one typically writes loops that sequentially perform additions of many numbers. Here is an example of such a loop, written in C: for (i = 0; i < n; i++) c[i] = a[i] + b[i]; A vectorizing compiler transforms such loops into sequences of vector operations. These vector operations perform additions on blocks of elements from the arrays a, b and c. Automatic vectorization is a major research topic in computer science. Background Early computers usually had one logic unit, which executed one instruction on one pair of operands at a time. Computer languages and programs therefore were designed to execute in sequence. Modern computers, though, can do many things at once. So, many optimizing compilers perform automatic vectorization, where parts of sequential programs are transformed into parallel operations. Loop vectorization transforms procedural loops by assigning a processing unit to each pair of operands. Programs spend most of their time within such loops. Therefore, vectorization can significantly accelerate them, especially over large data sets. Loop vectorization is implemented in Intel's MMX, SSE, and AVX, in Power ISA's AltiVec, and in ARM's NEON, SVE and SVE2 instruction sets. Many constraints prevent or hinder vectorization. Sometimes vectorization can slow down execution, for example because of pipeline synchronization or data-movement timing. Loop dependence analysis identifies loops tha
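For concreteness, here is roughly what a vectorizing compiler produces for the loop above on an x86 machine with SSE, written by hand with intrinsics (a sketch; real compilers also deal with aliasing, alignment, and loop tails more carefully):

#include <xmmintrin.h> /* SSE intrinsics */

/* c[i] = a[i] + b[i], four floats per vector operation. */
void vec_add(float *c, const float *a, const float *b, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);          /* load 4 floats from a */
        __m128 vb = _mm_loadu_ps(b + i);          /* load 4 floats from b */
        _mm_storeu_ps(c + i, _mm_add_ps(va, vb)); /* 4 additions at once  */
    }
    for (; i < n; i++)                            /* scalar tail for n % 4 */
        c[i] = a[i] + b[i];
}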
https://en.wikipedia.org/wiki/Generalized%20geography
In computational complexity theory, generalized geography is a well-known PSPACE-complete problem. Introduction Geography is a children's game, where players take turns naming cities from anywhere in the world. Each city chosen must begin with the same letter that ended the previous city name. Repetition is not allowed. The game begins with an arbitrary starting city and ends when a player loses because he or she is unable to continue. Graph model To visualize the game, a directed graph can be constructed whose nodes are the cities of the world. An arrow is added from node N1 to node N2 if and only if the city labeling N2 starts with the letter that ends the name of the city labeling node N1. In other words, we draw an arrow from one city to another if the first can lead to the second according to the game rules. The two players alternately extend a simple path through this directed graph; the first player unable to extend the path loses. An illustration of the game (containing some cities in Michigan) is shown in the figure below. In a generalized geography (GG) game, we replace the graph of city names with an arbitrary directed graph. The following graph is an example of a generalized geography game. Playing the game We define P1 as the player moving first and P2 as the player moving second, and name the nodes N1 to Nn. In the above figure, P1 has a winning strategy as follows: N1 points only to nodes N2 and N3. Thus P1's first move must be one of these two choices. P1 chooses N2 (if P1 chooses N3, then P2 will choose N9 as that is the only option and P1 will lose). Next P2 chooses N4 because it is the only remaining choice. P1 now chooses N5 and P2 subsequently chooses N3 or N7. Regardless of P2's choice, P1 chooses N9 and P2 has no remaining choices and loses the game. Computational complexity The problem of determining which player has a winning strategy in a generalized geography game is PSPACE-complete. Generalized geography is
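The strategy argument above is just a hand-executed game-tree search. Here is a small C sketch of the generic decision procedure (exponential time, which is fine for tiny graphs; the adjacency matrix below is a hypothetical example graph, not the article's figure):

#include <stdio.h>
#include <stdbool.h>

#define N 5
/* adj[u][v] = true if there is an edge u -> v (hypothetical small graph). */
static bool adj[N][N] = {
    {0,1,1,0,0}, {0,0,0,1,0}, {0,0,0,0,1}, {0,0,1,0,0}, {0,0,0,0,0}
};

/* True if the player to move from node 'u' wins: there is some unvisited
   successor from which the opponent (moving next) loses. */
static bool wins(int u, bool visited[N]) {
    visited[u] = true;
    bool result = false;
    for (int v = 0; v < N && !result; v++)
        if (adj[u][v] && !visited[v] && !wins(v, visited))
            result = true;   /* moving to v leaves the opponent with no win */
    visited[u] = false;      /* backtrack so other branches can reuse u */
    return result;
}

int main(void) {
    bool visited[N] = {false};
    printf("first player %s from node 0\n", wins(0, visited) ? "wins" : "loses");
    return 0;
}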
https://en.wikipedia.org/wiki/Long%20delayed%20echo
Long delayed echoes (LDEs) are radio echoes which return to the sender several seconds after a radio transmission has occurred. Delays of longer than 2.7 seconds are considered LDEs. LDEs have a number of proposed scientific origins. History These echoes were first observed in 1927 by civil engineer and amateur radio operator Jørgen Hals from his home near Oslo, Norway. Hals had repeatedly observed an unexpected second radio echo with a significant time delay after the primary radio echo ended. Unable to account for this strange phenomenon, he wrote a letter to Norwegian physicist Carl Størmer, explaining the event: At the end of the summer of 1927 I repeatedly heard signals from the Dutch short-wave transmitting station PCJJ at Eindhoven. At the same time as I heard these I also heard echoes. I heard the usual echo which goes round the Earth with an interval of about 1/7 of a second as well as a weaker echo about three seconds after the principal echo had gone. When the principal signal was especially strong, I suppose the amplitude for the last echo three seconds later, lay between 1/10 and 1/20 of the principal signal in strength. From where this echo comes I cannot say for the present, I can only confirm that I really heard it. Physicist Balthasar van der Pol helped Hals and Størmer investigate the echoes but, due to the sporadic nature of the echo events and variations in time delay, they did not find a suitable explanation. Long delayed echoes have been heard sporadically from the first observations in 1927 up to the present day. Five hypotheses Shlionskiy lists 15 possible natural explanations in two groups: reflections in outer space, and reflections within the Earth's magnetosphere. Vidmar and Crawford suggest five of them are the most likely. Sverre Holm, professor of signal processing at the University of Oslo, details those five; in summary: Ducting in the Earth's magnetosphere and ionosphere at low HF frequencies (1–4 MHz). Some similarities with
https://en.wikipedia.org/wiki/Stroke%20ratio
In a reciprocating piston engine, the stroke ratio is the ratio between cylinder bore diameter and piston stroke length, expressed either as a bore/stroke ratio or as a stroke/bore ratio. This can be used for either an internal combustion engine, where the fuel is burned within the cylinders of the engine, or an external combustion engine, such as a steam engine, where the combustion of the fuel takes place outside the working cylinders of the engine. A fairly comprehensive yet understandable study of stroke/bore effects was published in Horseless Age, 1916. Conventions In a piston engine, there are two different ways of describing the stroke ratio of its cylinders, namely: bore/stroke ratio, and stroke/bore ratio. Bore/stroke ratio Bore/stroke is the more commonly used term, with usage in North America, Europe, the United Kingdom, Asia, and Australia. The diameter of the cylinder bore is divided by the length of the piston stroke to give the ratio. Square, oversquare and undersquare engines The following terms describe the naming conventions for the configurations of the various bore/stroke ratios: Square engine A square engine has equal bore and stroke dimensions, giving a bore/stroke value of exactly 1:1. Square engine examples 1953 – Ferrari 250 Europa had a Lampredi V12 with equal bore and stroke. 1967 – FIAT 125, 124Sport engine 125A000-90 hp, 125B000-100 hp, 125BC000-110 hp, 1608 ccm, DOHC, with equal bore and stroke. 1970 – Ford 400 had equal bore and stroke. 1973 – Kawasaki Z1 and KZ(Z)900 had equal bore and stroke. 1973 – British Leyland's Australian division created a 4.4-litre version of the Rover V8 engine, with bore and stroke both measuring 88.9 mm. This engine was exclusively used in the Leyland P76. 1982 – Honda Nighthawk 250 and Honda CMX250C Rebel have equal bore and stroke, making them square engines. 1983 – Mazda FE 2.0L inline four-cylinder engine with a perfectly squared bore and stroke. This engine also features the ideal 1.75:1 rod/stroke ratio. 1
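A trivial C sketch of the two conventions and the square/oversquare/undersquare classification (the 88.9 mm figure is the Leyland P76 value from the list above; everything else is our illustration):

#include <stdio.h>

/* Classify an engine by its bore/stroke ratio. */
static const char *classify(double bore_mm, double stroke_mm) {
    double r = bore_mm / stroke_mm;      /* bore/stroke convention */
    if (r > 1.0) return "oversquare";
    if (r < 1.0) return "undersquare";
    return "square";
}

int main(void) {
    double bore = 88.9, stroke = 88.9;   /* equal bore and stroke -> 1:1 */
    printf("bore/stroke = %.3f (%s)\n", bore / stroke, classify(bore, stroke));
    printf("stroke/bore = %.3f\n", stroke / bore); /* the other convention */
    return 0;
}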
https://en.wikipedia.org/wiki/Myositis%20ossificans
Myositis ossificans comprises two syndromes characterized by heterotopic ossification (calcification) of muscle. The World Health Organization, in 2020, grouped myositis ossificans together with fibro-osseous pseudotumor of digits as a single specific entity in the category of fibroblastic and myofibroblastic tumors. Classification In the first, and by far most common, type, nonhereditary myositis ossificans (commonly referred to simply as "myositis ossificans", as in the remainder of this article), calcifications occur at the site of injured muscle, most commonly in the arms or in the quadriceps of the thighs. The term myositis ossificans traumatica is sometimes used when the condition is due to trauma. Myositis ossificans circumscripta, another synonym of myositis ossificans traumatica, refers to the new extraosseous bone that appears after trauma. The second condition, myositis ossificans progressiva (also referred to as fibrodysplasia ossificans progressiva) is an inherited affliction with an autosomal dominant pattern, in which the ossification can occur without injury, and typically grows in a predictable pattern. Although this disorder can be passed to offspring by those afflicted with FOP, it is also classified as nonhereditary, as it is most often attributed to a spontaneous genetic mutation upon conception. Most (i.e. 80%) ossifications arise in the thigh or arm, and are caused by a premature return to activity after an injury. Other sites include intercostal spaces, erector spinae, pectoralis muscles, glutei, and the chest. On planar x-ray, hazy densities are sometimes noted approximately one month after injury, while the denser opacities eventually seen may not be apparent until two months have passed. Pathophysiology The exact mechanism of myositis ossificans is not clear. An inappropriate response of stem cells in the bone against injury or inflammation causes inappropriate differentiation of fibroblasts into osteogenic cells. When a skeletal mu
https://en.wikipedia.org/wiki/Xenon%20%28video%20game%29
Xenon is a 1988 vertical scrolling shooter video game, the first developed by The Bitmap Brothers, and published by Melbourne House, which was then owned by Mastertronic. It was featured as a play-by-phone game on the Saturday-morning kids' show Get Fresh. Xenon was followed in 1989 by Xenon 2: Megablast. Description According to the game's instruction manual, the player assumes the role of Darrian, a future space pilot in the Federation, which has been at war for a decade with a mysterious and violent alien species called the Xenites. In response to a mayday transmission from Captain Xod following an attack on his trading fleet, Darrian is forced to travel through Xenite-occupied territory in order to provide support. Unlike most vertically scrolling shooters, the player craft has two modes, a flying plane and a ground tank. The transition between crafts can be initiated at almost any time during play (except during the mid- and end-of-level boss sections, as well as certain levels where a certain mode is forced), and the mode chosen depends on the nature of the threat the player faces. Destroying some enemies releases power-ups the player can catch to enhance their ship. Ports Originally released for the Atari ST, Xenon was quickly ported to other platforms: the Amiga, Amstrad CPC, Commodore 64, DOS, MSX and ZX Spectrum. An arcade machine version of the game was also released through Mastertronic's Arcadia division which ran on Commodore Amiga hardware. Reception Xenon was almost universally well received on launch, with reviewers from magazines covering a range of platforms all scoring the game very highly. Only German magazine Power Play bucked the trend, awarding it a score of 4.5 out of 10. Writing in New Computer Express about the 1991 budget re-release, Stuart Campbell stated that although the graphics were "gorgeous" and had "never really been seen before", the gameplay was "simply tedious" and the game was the first to "turn 'style-over-content' int
https://en.wikipedia.org/wiki/Essential%20complexity
Essential complexity is a numerical measure defined by Thomas J. McCabe, Sr., in his highly cited 1976 paper better known for introducing cyclomatic complexity. McCabe defined essential complexity as the cyclomatic complexity of the reduced CFG (control-flow graph) after iteratively replacing (reducing) all structured programming control structures, i.e. those having a single entry point and a single exit point (for example if-then-else and while loops), with placeholder single statements. McCabe's reduction process is intended to simulate the conceptual replacement of control structures (and the actual statements they contain) with subroutine calls, hence the requirement for the control structures to have a single entry and a single exit point. (Nowadays a process like this would fall under the umbrella term of refactoring.) All structured programs evidently have an essential complexity of 1 as defined by McCabe, because they can all be iteratively reduced to a single call to a top-level subroutine. As McCabe explains in his paper, his essential complexity metric was designed to provide a measure of how far off this ideal (of being completely structured) a given program was. Thus essential complexity values greater than 1, which can only be obtained for non-structured programs, indicate that they are further away from the structured programming ideal. To avoid confusion between various notions of reducibility to structured programs, it's important to note that McCabe's paper briefly discusses and then operates in the context of a 1973 paper by S. Rao Kosaraju, which gave a refinement (or alternative view) of the structured program theorem. The seminal 1966 paper of Böhm and Jacopini showed that all programs can be [re]written using only structured programming constructs (aka the D structures: sequence, if-then-else, and while-loop); however, in transforming an arbitrary program into a structured program additional variables may need to be introduced (and used in the tes
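As an illustration (our sketch, with the caveat that exact metric values depend on the analysis tool), the first C function below uses only single-entry/single-exit constructs and so reduces to essential complexity 1, while the second jumps out of a nested loop, leaving control flow that the reduction cannot eliminate:

#include <stdio.h>

/* Structured: every control structure has one entry and one exit,
   so each can be reduced away in turn -- essential complexity 1. */
int count_positive(const int *a, int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        if (a[i] > 0)
            count++;
    }
    return count;
}

/* Unstructured: the goto escapes two loops at once, so the reduced
   control-flow graph retains branching -- essential complexity > 1. */
int find_pair(const int *a, int n, int target) {
    int found = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] + a[j] == target) {
                found = 1;
                goto done;   /* branch out of a loop: an unstructured construct */
            }
done:
    return found;
}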
https://en.wikipedia.org/wiki/Warm%20antibody%20autoimmune%20hemolytic%20anemia
Warm antibody autoimmune hemolytic anemia (WAIHA) is the most common form of autoimmune haemolytic anemia. About half of the cases are of unknown cause, with the other half attributable to a predisposing condition or medications being taken. In contrast to cold autoimmune hemolytic anemia (e.g., cold agglutinin disease and paroxysmal cold hemoglobinuria), which occurs at cold temperatures (28–31 °C), WAIHA occurs at body temperature. Causes AIHA may be: Idiopathic, that is, without any known cause Secondary to another disease, such as an antecedent upper respiratory tract infection, systemic lupus erythematosus or a malignancy, such as chronic lymphocytic leukemia (CLL) Medications Several medications have been associated with the development of warm AIHA. These medications include quinidine, nonsteroidal anti-inflammatory drugs (NSAIDs), alpha methyldopa, and antibiotics such as penicillins, cephalosporins (such as ceftriaxone and cefotetan), and ciprofloxacin. Pathophysiology The most common antibody isotype involved in warm antibody AIHA is IgG, though sometimes IgA is found. The IgG antibodies attach to a red blood cell, leaving their Fc portion exposed, with maximal reactivity at 37 °C (versus cold antibody induced hemolytic anemia, whose antibodies only bind red blood cells at low body temperatures, typically 28–31 °C). The Fc region is recognized and bound by Fc receptors found on monocytes and macrophages in the spleen. These cells pick off portions of the red cell membrane, almost as if they are taking a bite. The loss of membrane causes the red blood cells to become spherocytes. Spherocytes are not as flexible as normal RBCs and will be singled out for destruction in the red pulp of the spleen as well as other portions of the reticuloendothelial system. The red blood cells trapped in the spleen cause the spleen to enlarge, leading to the splenomegaly often seen in these patients. There are two models for this: the hapten model and the a
https://en.wikipedia.org/wiki/Streptococcus%20agalactiae
Streptococcus agalactiae (also known as group B streptococcus or GBS) is a gram-positive coccus (round bacterium) with a tendency to form chains (as reflected by the genus name Streptococcus). It is a beta-hemolytic, catalase-negative, facultative anaerobe. S. agalactiae is the most common human pathogen of streptococci belonging to group B of the Rebecca Lancefield classification of streptococci. GBS are surrounded by a bacterial capsule composed of polysaccharides (exopolysaccharide). The species is subclassified into ten serotypes (Ia, Ib, II–IX) depending on the immunologic reactivity of their polysaccharide capsule. The plural term group B streptococci (referring to the serotypes) and the singular term group B streptococcus (referring to the single species) are both commonly encountered (even though S. halichoeri and S. pseudoporcinus are also group B streptococci). In general, GBS is a harmless commensal bacterium that forms part of the human microbiota, colonizing the gastrointestinal and genitourinary tract of up to 30% of healthy human adults (asymptomatic carriers). Nevertheless, GBS can cause severe invasive infections, especially in newborns, the elderly, and people with compromised immune systems. S. agalactiae is also a common veterinary pathogen, because it can cause bovine mastitis (inflammation of the udder) in dairy cows. The species name agalactiae, meaning "of no milk", alludes to this. Laboratory identification GBS grows readily on blood agar plates as colonies surrounded by a narrow zone of β-hemolysis. GBS is characterized by the presence in the cell wall of the group B antigen of the Lancefield classification (Lancefield grouping), which can be detected directly in intact bacteria using latex agglutination tests. The CAMP test is another important test for the identification of GBS. The CAMP factor produced by GBS acts synergistically with the staphylococcal β-hemolysin, inducing enhanced hemolysis of sheep or bovine erythrocytes. GBS is also able
https://en.wikipedia.org/wiki/Hemolysis%20%28microbiology%29
Hemolysis (from Greek αιμόλυση, meaning 'blood breakdown') is the breakdown of red blood cells. The ability of bacterial colonies to induce hemolysis when grown on blood agar is used to classify certain microorganisms. This is particularly useful in classifying streptococcal species. A substance that causes hemolysis is a hemolysin. Types Alpha-hemolysis When alpha-hemolysis (α-hemolysis) is present, the agar under the colony is light and greenish. Streptococcus pneumoniae and a group of oral streptococci (Streptococcus viridans or viridans streptococci) display alpha hemolysis. This is sometimes called green hemolysis because of the color change in the agar. Other synonymous terms are incomplete hemolysis and partial hemolysis. Alpha hemolysis is caused by hydrogen peroxide produced by the bacterium, oxidizing hemoglobin producing the green oxidized derivative methemoglobin. Beta-hemolysis Beta-hemolysis (β-hemolysis), sometimes called complete hemolysis, is a complete lysis of red cells in the media around and under the colonies: the area appears lightened (yellow) and transparent. Streptolysin, an exotoxin, is the enzyme produced by the bacteria which causes the complete lysis of red blood cells. There are two types of streptolysin: Streptolysin O (SLO) and streptolysin S (SLS). Streptolysin O is an oxygen-sensitive cytotoxin, secreted by most Group A streptococcus (GAS) and Streptococcus dysgalactiae, and interacts with cholesterol in the membrane of eukaryotic cells (mainly red and white blood cells, macrophages, and platelets), and usually results in β-hemolysis under the surface of blood agar. Streptolysin S is an oxygen-stable cytotoxin also produced by most GAS strains which results in clearing on the surface of blood agar. SLS affects immune cells, including polymorphonuclear leukocytes and lymphocytes, and is thought to prevent the host immune system from clearing infection. Streptococcus pyogenes, or Group A beta-hemolytic Strep (GAS), displa
https://en.wikipedia.org/wiki/Symmetric%20hydrogen%20bond
A symmetric hydrogen bond is a special type of hydrogen bond in which the proton is spaced exactly halfway between two identical atoms. The strength of the bond to each of those atoms is equal. It is an example of a 3-center 4-electron bond. This type of bond is much stronger than a "normal" hydrogen bond; in fact, its strength is comparable to that of a covalent bond. It is seen in ice at high pressure (Ice X), and also in the solid phase of many anhydrous acids such as hydrofluoric acid and formic acid at high pressure. It is also seen in the bifluoride ion [F−H−F]−. Much has been done to explain the symmetric hydrogen bond quantum-mechanically, as it seems to violate the duet rule for the first shell: the proton is effectively surrounded by four electrons. Because of this problem, some consider it to be an ionic bond.
https://en.wikipedia.org/wiki/Atoms%20in%20molecules
In quantum chemistry, the quantum theory of atoms in molecules (QTAIM), sometimes referred to as atoms in molecules (AIM), is a model of molecular and condensed matter electronic systems (such as crystals) in which the principal objects of molecular structure - atoms and bonds - are natural expressions of a system's observable electron density distribution function. An electron density distribution of a molecule is a probability distribution that describes the average manner in which the electronic charge is distributed throughout real space in the attractive field exerted by the nuclei. According to QTAIM, molecular structure is revealed by the stationary points of the electron density together with the gradient paths of the electron density that originate and terminate at these points. QTAIM was primarily developed by Professor Richard Bader and his research group at McMaster University over the course of decades, beginning with analyses of theoretically calculated electron densities of simple molecules in the early 1960s and culminating with analyses of both theoretically and experimentally measured electron densities of crystals in the 90s. The development of QTAIM was driven by the assumption that, since the concepts of atoms and bonds have been and continue to be so ubiquitously useful in interpreting, classifying, predicting and communicating chemistry, they should have a well-defined physical basis. QTAIM recovers the central operational concepts of the molecular structure hypothesis, that of a functional grouping of atoms with an additive and characteristic set of properties, together with a definition of the bonds that link the atoms and impart the structure. QTAIM defines chemical bonding and structure of a chemical system based on the topology of the electron density. In addition to bonding, QTAIM allows the calculation of certain physical properties on a per-atom basis, by dividing space up into atomic volumes containing exactly one nucleus, wh
https://en.wikipedia.org/wiki/Timeline%20of%20cryptography
Below is a timeline of notable events related to cryptography. B.C. 36th century The Sumerians develop cuneiform writing and the Egyptians develop hieroglyphic writing. 16th century The Phoenicians develop an alphabet 600–500 Hebrew scholars make use of simple monoalphabetic substitution ciphers (such as the Atbash cipher) c. 400 Spartan use of scytale (alleged) c. 400 Herodotus reports use of steganography in reports to Greece from Persia (tattoo on shaved head) 100–1 A.D. Notable Roman ciphers such as the Caesar cipher. 1–1799 A.D. 801–873 A.D. Cryptanalysis and frequency analysis leading to techniques for breaking monoalphabetic substitution ciphers are developed in A Manuscript on Deciphering Cryptographic Messages by the Muslim mathematician Al-Kindi (Alkindus), who may have been inspired by textual analysis of the Qur'an. He also covers methods of encipherment, cryptanalysis of certain encipherments, and statistical analysis of letters and letter combinations in Arabic. 1355–1418 Ahmad al-Qalqashandi writes Subh al-a 'sha, a 14-volume encyclopedia including a section on cryptology, attributed to Ibn al-Durayhim (1312–1361). The list of ciphers in this work includes both substitution and transposition, and for the first time, a cipher with multiple substitutions for each plaintext letter. It also included an exposition on and worked example of cryptanalysis, including the use of tables of letter frequencies and sets of letters which cannot occur together in one word. 1450 The Chinese develop wooden block movable type printing. 1450–1520 The Voynich manuscript, an example of a possibly encoded illustrated book, is written. 1466 Leon Battista Alberti invents the polyalphabetic cipher and the first known mechanical cipher device 1518 Johannes Trithemius' book on cryptology 1553 Bellaso invents the Vigenère cipher 1585 Vigenère's book on ciphers 1586 Cryptanalysis is used by spymaster Sir Francis Walsingham to implicate Mary, Queen of Scots, in the Babingto
https://en.wikipedia.org/wiki/Harlan%20Mills
Harlan D. Mills (May 14, 1919 – January 8, 1996) was Professor of Computer Science at the Florida Institute of Technology and founder of Software Engineering Technology, Inc. of Vero Beach, Florida (since acquired by Q-Labs). Mills' contributions to software engineering have had a profound and enduring effect on education and industrial practice. Since earning his Ph.D. in Mathematics at Iowa State University in 1952, Mills led a distinguished career. As an IBM research fellow, Mills adapted existing ideas from engineering and computer science to software development. These included automata theory, the structured programming theory of Edsger Dijkstra, Robert W. Floyd, and others, and Markov chain-driven software testing. His Cleanroom software development process emphasized top-down design and formal specification. Mills contributed his ideas to the profession in six books and over fifty refereed articles in technical journals. Mills was termed a "super-programmer", a term which would evolve to the concept in IBM of a "Chief Programmer." Achievements Ph.D.: Iowa State University, 1952 Visiting Professor (Part Time) 1975-1987 Adjunct Professor, 1987-1995 Chairman, NSF Computer Science Research Panel on Software Methodology, 1974–77 the Chairman of the First National Conference on Software Engineering, 1975 Editor for IEEE Transactions on Software Engineering, 1975–81 U.S. Representative for Software at the IFIP Congress, 1977 Governor of the IEEE Computer Society, 1980–83 Chairman for IEEE Fall CompCon, 1981 Chairman, Computer Science Panel, U.S. Air Force Scientific Advisory Board, 1986 Awardee, Distinguished Information Sciences Award, DPMA 1985 Designer of initial NFL scheduling algorithm (http://trace.tennessee.edu/utk_harlan/407/) The ICSE-affiliated colloquium "Science and Engineering for Software Development" is being organized in honor of Harlan D. Mills, and as a recognition of his enduring legacy to the theory and practice of software engi
https://en.wikipedia.org/wiki/Puffed%20rice
Puffed rice and popped rice (or pop rice) are types of puffed grain made from rice commonly eaten in the traditional cuisines of Southeast Asia, East Asia, and South Asia. It has also been produced commercially in the West since 1904 and is popular in breakfast cereals and other snack foods. Traditional methods to puff or pop rice include frying in oil or salt. Western commercial puffed rice is usually made by heating rice kernels under high pressure in the presence of steam, though the method of manufacture varies widely. They are either eaten as loose grains or made into puffed rice cakes. Description While the terms "puffed rice" and "popped rice" are used interchangeably, they are properly different processes. Puffed rice refers to pre-gelatinized rice grains (either by being parboiled, boiled, or soaked) that are puffed by the rapid expansion of steam upon cooking. Puffed rice retains the shape of the rice grain, but is much larger. Popped rice, on the other hand, refers to rice grains where the hull or the bran is intact. When cooked, the kernel explodes through the hard outer covering due to heating. Popped rice has an irregular shape similar to popcorn. There are various methods, both modern and traditional, for making puffed and popped rice. Traditional versions by region East Asia Puffed rice or other grains are occasionally found as street food in China (called "mixiang" 米香), Taiwan (called "bí-phang" 米芳), Korea (called "ppeong twigi" 뻥튀기), and Japan (called "pon gashi" ポン菓子), where hawkers implement the puffing process using an integrated pushcart/puffer featuring a rotating steel pressure chamber heated over an open flame. The great booming sound produced by the release of pressure serves as advertising. Mainland China The earliest mention of puffed rice in Mainland China is in Zhejiang Province, from a book by Fan Chengda written in the Song Dynasty (c. 1100). It was part of the rituals of the Spring Festival and was made in large cooking pots kn
https://en.wikipedia.org/wiki/Isotopic%20signature
An isotopic signature (also isotopic fingerprint) is a ratio of non-radiogenic 'stable isotopes', stable radiogenic isotopes, or unstable radioactive isotopes of particular elements in an investigated material. The ratios of isotopes in a sample material are measured by isotope-ratio mass spectrometry against an isotopic reference material. This process is called isotope analysis. Stable isotopes The atomic mass of different isotopes affects their chemical kinetic behavior, leading to natural isotope separation processes. Carbon isotopes For example, different sources and sinks of methane have different affinities for the 12C and 13C isotopes, which allows distinguishing between different sources by the 13C/12C ratio in methane in the air. In geochemistry, paleoclimatology and paleoceanography this ratio is called δ13C. The ratio is calculated with respect to the Pee Dee Belemnite (PDB) standard: δ13C = ((13C/12C)sample / (13C/12C)standard − 1) × 1000 ‰. Similarly, carbon in inorganic carbonates shows little isotopic fractionation, while carbon in materials originating from photosynthesis is depleted of the heavier isotopes. In addition, plants differ in their biochemical pathways: C3 carbon fixation, where the isotope separation effect is more pronounced; C4 carbon fixation, where the heavier 13C is less depleted; and Crassulacean acid metabolism (CAM) plants, where the effect is similar to but less pronounced than in C4 plants. Isotopic fractionation in plants is caused by physical (slower diffusion of 13C in plant tissues due to increased atomic weight) and biochemical (preference of 12C by two enzymes: RuBisCO and phosphoenolpyruvate carboxylase) factors. The different isotope ratios for the two kinds of plants propagate through the food chain, thus it is possible to determine if the principal diet of a human or an animal consists primarily of C3 plants (rice, wheat, soybeans, potatoes) or C4 plants (corn, or corn-fed beef) by isotope analysis of their flesh and bone collagen (however, to obtain
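A one-line C implementation of the δ13C definition above (a sketch; the sample ratio is made up for illustration, and 0.0112372 is the commonly quoted 13C/12C ratio of the PDB standard):

#include <stdio.h>

/* delta-13C in permil, relative to a standard isotope ratio. */
static double delta13C(double r_sample, double r_standard) {
    return (r_sample / r_standard - 1.0) * 1000.0;
}

int main(void) {
    double r_pdb    = 0.0112372; /* PDB standard 13C/12C ratio      */
    double r_sample = 0.0109400; /* hypothetical measured 13C/12C   */
    printf("d13C = %.1f permil\n", delta13C(r_sample, r_pdb));
    return 0;
}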
https://en.wikipedia.org/wiki/Motor%20control
Motor control is the regulation of movement in organisms that possess a nervous system. Motor control includes reflexes as well as directed movement. To control movement, the nervous system must integrate multimodal sensory information (both from the external world and from proprioception) and elicit the necessary signals to recruit muscles to carry out a goal. This pathway spans many disciplines, including multisensory integration, signal processing, coordination, biomechanics, and cognition, and the computational challenges are often discussed under the term sensorimotor control. Successful motor control is crucial to interacting with the world to carry out goals as well as for posture, balance, and stability. Some researchers (mostly neuroscientists studying movement, such as Daniel Wolpert and Randy Flanagan) argue that motor control is the reason brains exist at all. Neural control of muscle force All movements, e.g. touching your nose, require motor neurons to fire action potentials that result in contraction of muscles. In humans, ~150,000 motor neurons control the contraction of ~600 muscles. To produce movements, a subset of these 600 muscles must contract in a temporally precise pattern to produce the right force at the right time. Motor units and force production A single motor neuron and the muscle fibers it innervates are called a motor unit. For example, the rectus femoris contains approximately 1 million muscle fibers, which are controlled by around 1,000 motor neurons. Activity in the motor neuron causes contraction in all of the innervated muscle fibers so that they function as a unit. Increasing action potential frequency (spike rate) in the motor neuron increases the muscle fiber contraction force, up to the maximal force. The maximal force depends on the contractile properties of the muscle fibers. Within a motor unit, all the muscle fibers are of the same type (e.g. type I (slow twitch) or type II (fast twitch)), and motor units of mult
https://en.wikipedia.org/wiki/Spin%20structure
In differential geometry, a spin structure on an orientable Riemannian manifold allows one to define associated spinor bundles, giving rise to the notion of a spinor in differential geometry. Spin structures have wide applications to mathematical physics, in particular to quantum field theory where they are an essential ingredient in the definition of any theory with uncharged fermions. They are also of purely mathematical interest in differential geometry, algebraic topology, and K theory. They form the foundation for spin geometry. Overview In geometry and in field theory, mathematicians ask whether or not a given oriented Riemannian manifold (M,g) admits spinors. One method for dealing with this problem is to require that M has a spin structure. This is not always possible since there is potentially a topological obstruction to the existence of spin structures. Spin structures will exist if and only if the second Stiefel–Whitney class w2(M) ∈ H2(M, Z2) of M vanishes. Furthermore, if w2(M) = 0, then the set of the isomorphism classes of spin structures on M is acted upon freely and transitively by H1(M, Z2) . As the manifold M is assumed to be oriented, the first Stiefel–Whitney class w1(M) ∈ H1(M, Z2) of M vanishes too. (The Stiefel–Whitney classes wi(M) ∈ Hi(M, Z2) of a manifold M are defined to be the Stiefel–Whitney classes of its tangent bundle TM.) The bundle of spinors πS: S → M over M is then the complex vector bundle associated with the corresponding principal bundle πP: P → M of spin frames over M and the spin representation of its structure group Spin(n) on the space of spinors Δn. The bundle S is called the spinor bundle for a given spin structure on M. A precise definition of spin structure on manifold was possible only after the notion of fiber bundle had been introduced; André Haefliger (1956) found the topological obstruction to the existence of a spin structure on an orientable Riemannian manifold and Max Karoubi (1968) extended this result
https://en.wikipedia.org/wiki/Financial%20modeling
Financial modeling is the task of building an abstract representation (a model) of a real world financial situation. This is a mathematical model designed to represent (a simplified version of) the performance of a financial asset or portfolio of a business, project, or any other investment. Typically, then, financial modeling is understood to mean an exercise in either asset pricing or corporate finance, of a quantitative nature. It is about translating a set of hypotheses about the behavior of markets or agents into numerical predictions. At the same time, "financial modeling" is a general term that means different things to different users; the reference usually relates either to accounting and corporate finance applications or to quantitative finance applications. Accounting In corporate finance and the accounting profession, financial modeling typically entails financial statement forecasting; usually the preparation of detailed company-specific models used for decision making purposes and financial analysis. Applications include: Business valuation and stock valuation - especially via discounted cash flow, but including other valuation approaches Scenario planning and management decision making ("what is"; "what if"; "what has to be done") Budgeting: revenue forecasting and analytics; production budgeting; operations budgeting Capital budgeting, including cost of capital (i.e. WACC) calculations Cash flow forecasting; working capital- and treasury management; asset and liability management Financial statement analysis / ratio analysis (including of operating- and finance leases, and R&D) Transaction analytics: M&A, PE, VC, LBO, IPO, Project finance, P3 Credit decisioning: Credit analysis, Consumer credit risk; impairment- and provision-modeling Management accounting: Activity-based costing, Profitability analysis, Cost analysis, Whole-life cost, Managerial risk accounting To generalize as to the nature of these models: firstly, as they are built a
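As one tiny, generic illustration of the discounted cash flow valuation mentioned above (a sketch under simple assumptions of our choosing: annual end-of-year cash flows and a single constant discount rate; real valuation models are far more detailed):

#include <stdio.h>
#include <math.h>

/* Present value of a stream of annual cash flows at a constant
   discount rate: PV = sum over t of cf[t] / (1 + r)^(t+1). */
static double dcf_value(const double *cf, int years, double rate) {
    double pv = 0.0;
    for (int t = 0; t < years; t++)
        pv += cf[t] / pow(1.0 + rate, t + 1);
    return pv;
}

int main(void) {
    double cf[] = {100.0, 110.0, 120.0}; /* forecast cash flows (made up) */
    printf("PV = %.2f\n", dcf_value(cf, 3, 0.10)); /* 10% discount rate */
    return 0;
}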
https://en.wikipedia.org/wiki/VAXELN
VAXELN (typically pronounced "VAX-elan") is a discontinued real-time operating system for the VAX family of computers produced by the Digital Equipment Corporation (DEC) of Maynard, Massachusetts. As with RSX-11 and VMS, Dave Cutler was the principal force behind the development of this operating system. Cutler's team developed the product after moving to the Seattle, Washington area to form the DECwest Engineering Group; DEC's first engineering group outside New England. Initial target platforms for VAXELN were the backplane interconnect computers such as the V-11 family. When VAXELN was well under way, Cutler spearheaded the next project, the MicroVAX I, the first VAX microcomputer. Although it was a low-volume product compared with the New England-developed MicroVAX II, the MicroVAX I demonstrated the set of architectural decisions needed to support a single-board implementation of the VAX computer family, and it also provided a platform for embedded system applications written for VAXELN. The VAXELN team made the decision, for the first release, to use the programming language Pascal as its system programming language. The development team built the first product in approximately 18 months. Other languages, including C, Ada, and Fortran were supported in later releases of the system as optional extras. A relational database, named VAX Rdb/ELN was another optional component of the system. Later versions of VAXELN supported an X11 server named EWS (VAXELN Window Server). VAXELN with EWS was used as the operating system for the VT1300 X terminal, and was sometimes used to convert old VAXstation hardware into X terminals. Beginning with version 4.3, VAXELN gained support for TCP/IP networking and a subset of POSIX APIs. VAXELN allowed the creation of a self-contained embedded system application that would run on VAX (and later MicroVAX) hardware with no other operating system present. The system was debuted in Las Vegas in the early 1980s, with a variety of amusi
https://en.wikipedia.org/wiki/Bast%20fibre
Bast fibre (also called phloem fibre or skin fibre) is plant fibre collected from the phloem (the "inner bark", sometimes called "skin") or bast surrounding the stem of certain dicotyledonous plants. It supports the conductive cells of the phloem and provides strength to the stem. Some of the economically important bast fibres are obtained from herbs cultivated in agriculture, as for instance flax, hemp, or ramie, but bast fibres from wild plants, as stinging nettle, and trees such as lime or linden, willow, oak, wisteria, and mulberry have also been used in the past. Bast fibres are classified as soft fibres, and are flexible. Fibres from monocotyledonous plants, called "leaf fiber", are classified as hard fibres and are stiff. Since the valuable fibres are located in the phloem, they must often be separated from the xylem material ("woody core"), and sometimes also from the epidermis. The process for this is called retting, and can be performed by micro-organisms either on land (nowadays the most important) or in water, or by chemicals (for instance high pH and chelating agents) or by pectinolytic enzymes. In the phloem, bast fibres occur in bundles that are glued together by pectin and calcium ions. More intense retting separates the fibre bundles into elementary fibres, that can be several centimetres long. Often bast fibres have higher tensile strength than other kinds, and are used in high-quality textiles (sometimes in blends with cotton or synthetic fibres), ropes, yarn, paper, composite materials and burlap. An important property of bast fibres is that they contain a special structure, the fibre node, that represents a weak point, and gives flexibility. Seed hairs, such as cotton, do not have nodes. Etymology The term "bast" derives from Old English bæst ("inner bark of trees from which ropes were made"), from Proto-Germanic *bastaz ("bast, rope"). It may have the same root as Latin ("bundle") and Middle Irish basc ("necklace"). Use of bast fibre Plants
https://en.wikipedia.org/wiki/Square%20planar%20molecular%20geometry
The square planar molecular geometry in chemistry describes the stereochemistry (spatial arrangement of atoms) that is adopted by certain chemical compounds. As the name suggests, molecules of this geometry have their atoms positioned at the corners of a square around a central atom. Examples Numerous compounds adopt this geometry, examples being especially numerous for transition metal complexes. The noble gas compound xenon tetrafluoride adopts this structure as predicted by VSEPR theory. The geometry is prevalent for transition metal complexes with d8 configuration, which includes Rh(I), Ir(I), Pd(II), Pt(II), and Au(III). Notable examples include the anticancer drugs cisplatin, [PtCl2(NH3)2], and carboplatin. Many homogeneous catalysts are square planar in their resting state, such as Wilkinson's catalyst and Crabtree's catalyst. Other examples include Vaska's complex and Zeise's salt. Certain ligands (such as porphyrins) stabilize this geometry. Splitting of d-orbitals A general d-orbital splitting diagram for square planar (D4h) transition metal complexes can be derived from the general octahedral (Oh) splitting diagram, in which the dz2 and the dx2−y2 orbitals are degenerate and higher in energy than the degenerate set of dxy, dxz and dyz orbitals. When the two axial ligands are removed to generate a square planar geometry, the dz2 orbital is driven lower in energy, as electron-electron repulsion with ligands on the z-axis is no longer present. However, for purely σ-donating ligands the dz2 orbital is still higher in energy than the dxy, dxz and dyz orbitals because its torus-shaped lobe bears electron density on the x- and y-axes and therefore interacts with the filled ligand orbitals. The dxy, dxz and dyz orbitals are generally presented as degenerate, but they in fact split into two different energy levels according to the irreducible representations of the point group D4h. Their relative ordering depends on the nature of the particular complex. Furthermore,
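A compact way to summarize the level scheme described above for purely σ-donating ligands is the following ordering (a hedged sketch only; as noted, the placement of $d_{xy}$ relative to $d_{xz}$ and $d_{yz}$ varies from complex to complex):

$E(d_{xy}),\, E(d_{xz}),\, E(d_{yz}) \;<\; E(d_{z^2}) \;<\; E(d_{x^2-y^2})$

The $d_{x^2-y^2}$ orbital lies highest because it points directly at the four in-plane ligands; in strong-field d8 complexes it remains empty, which is why such complexes are typically diamagnetic and low-spin.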
https://en.wikipedia.org/wiki/Helminthic%20therapy
Helminthic therapy, an experimental type of immunotherapy, is the treatment of autoimmune diseases and immune disorders by means of deliberate infestation with a helminth or with the eggs of a helminth. Helminths are parasitic worms such as hookworms, whipworms, and threadworms that have evolved to live within a host organism on which they rely for nutrients. These worms are members of two phyla: nematodes, which are primarily used in human helminthic therapy, and flatworms (trematodes). Helminthic therapy consists of the inoculation of the patient with specific parasitic intestinal nematodes (or other helminths). A number of such organisms are currently being investigated for their use as treatment, including: Trichuris suis ova, commonly known as pig whipworm eggs; Necator americanus, commonly known as hookworm; Trichuris trichiura ova, commonly referred to as human whipworm eggs; and Hymenolepis diminuta, commonly known as rat tapeworm cysticerci. While these four species may be considered mutualists – providing benefit to their host without causing long-term harm – other helminth species have also demonstrated therapeutic effects but have the potential to cause less desirable or even harmful effects, and therefore do not share the ideal characteristics of a therapeutic helminth. These include Ascaris lumbricoides, commonly known as human giant roundworm; Strongyloides stercoralis, commonly known as human roundworm; Enterobius vermicularis, commonly known as pinworm or threadworm; and Hymenolepis nana, also known as dwarf tapeworm. Current research targets Crohn's disease, ulcerative colitis, inflammatory bowel disease, coeliac disease, multiple sclerosis and asthma. Helminth infection has emerged as one possible explanation for the low incidence of autoimmune diseases and allergies in less developed countries, while reduced infection rates have been linked with the significant and sustained increase in autoimmune diseases seen
https://en.wikipedia.org/wiki/QLogic
QLogic Corporation was an American manufacturer of server and storage networking connectivity and application acceleration products, based in Aliso Viejo, California through 2016. QLogic's products include Fibre Channel adapters, converged network adapters for Fibre Channel over Ethernet (FCoE), Ethernet network interface controllers, iSCSI adapters, and application-specific integrated circuits (ASICs). It was a public company from 1992 to 2016. History QLogic was created in 1992 after being spun off by Emulex. QLogic's original business was disk controllers. QLogic had its initial public offering in 1994 and was traded on NASDAQ under the symbol QLGC. Originally located in a Costa Mesa, California building adjacent to Emulex, it competed against its parent company in the market for Fibre Channel controllers for storage area networks. QLogic acquired several companies, including NetXen. Integrated circuit designer Silicon Design Resources Inc., based in Austin, Texas, was acquired for about $2 million in 1998. In May 2000, QLogic acquired Fibre Channel switch maker Ancor Communications for about $1.7 billion in stock. Little Mountain Group, founded in 1999 and a developer of iSCSI technology, was acquired in January 2001 for about $30 million. The compiler company PathScale was acquired for about $109 million in February 2006. SilverStorm Technologies, which designed InfiniBand products, was acquired in October 2006 for about $60 million. After attempting to use PathScale for cluster computing over InfiniBand, QLogic sold the compiler business to SiCortex in August 2006. QLogic was led from 1996 by H.K. Desai, who became executive chairman in 2010 and served until his death in June 2014. In 2012, the InfiniBand products were sold to Intel for $125 million. Simon Biddiscombe became chief executive in November 2010 and resigned in May 2013 after two years of falling revenue. Prasad Rampalli became chief executive a few months later and served until August 2015. Jean Hu becam
https://en.wikipedia.org/wiki/FC-HBA%20API
In computing, the FC-HBA API (also called the SNIA Common HBA API) is an application programming interface for host bus adapters connecting computers to hard disks via a Fibre Channel network. It was developed by the Storage Networking Industry Association and published by the T11.5 committee of INCITS. An "early implementers version" was published in 2000, and the current version was completed in 2002. According to the FAQ, "the HBA API has been overwhelmingly adopted by Storage Area Network vendors to help manage, monitor, and deploy storage area networks in an interoperable way." Vendors supply their own libraries (written in C) as plugins for a common HBA library. Operating system support Windows Server 2003, AIX 5, HP-UX, and Solaris include support for the FC-HBA API, and support is being added to Linux. See also SM HBA
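To illustrate the plugin model, the following is a minimal C sketch of adapter enumeration through the common library, assuming a vendor SDK that ships the usual hbaapi.h header; the buffer size and error handling are illustrative rather than taken from the specification.

#include <stdio.h>
#include <hbaapi.h>  /* SNIA Common HBA API header (name assumed from typical SDKs) */

int main(void)
{
    char name[256];          /* adapter name buffer; size is an assumption */
    HBA_UINT32 count, i;

    /* Load the common library, which in turn loads each registered vendor plugin. */
    if (HBA_LoadLibrary() != HBA_STATUS_OK) {
        fprintf(stderr, "could not load HBA libraries\n");
        return 1;
    }

    /* Adapters are counted across all vendor libraries. */
    count = HBA_GetNumberOfAdapters();
    printf("%u Fibre Channel adapter(s) found\n", (unsigned)count);

    for (i = 0; i < count; i++) {
        if (HBA_GetAdapterName(i, name) == HBA_STATUS_OK) {
            HBA_HANDLE h = HBA_OpenAdapter(name);  /* a zero handle is assumed to mean failure */
            printf("adapter %u: %s\n", (unsigned)i, name);
            if (h != 0)
                HBA_CloseAdapter(h);
        }
    }

    HBA_FreeLibrary();
    return 0;
}

Because each vendor supplies its own plugin behind these calls, the same program can enumerate adapters from several manufacturers at once, which is the interoperability the API was designed for.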
https://en.wikipedia.org/wiki/Pareto%20front
In multi-objective optimization, the Pareto front (also called Pareto frontier or Pareto curve) is the set of all Pareto efficient solutions. The concept is widely used in engineering. It allows the designer to restrict attention to the set of efficient choices, and to make tradeoffs within this set, rather than considering the full range of every parameter. Definition The Pareto frontier, P(Y), may be more formally described as follows. Consider a system with function $f: X \to \mathbb{R}^m$, where $X$ is a compact set of feasible decisions in the metric space $\mathbb{R}^n$, and $Y$ is the feasible set of criterion vectors in $\mathbb{R}^m$, such that $Y = \{ y \in \mathbb{R}^m : y = f(x),\ x \in X \}$. We assume that the preferred directions of criteria values are known. A point $y'' \in \mathbb{R}^m$ is preferred to (strictly dominates) another point $y' \in \mathbb{R}^m$, written as $y'' \succ y'$. The Pareto frontier is thus written as: $P(Y) = \{ y' \in Y : \{ y'' \in Y : y'' \succ y',\ y'' \neq y' \} = \emptyset \}$. Marginal rate of substitution A significant aspect of the Pareto frontier in economics is that, at a Pareto-efficient allocation, the marginal rate of substitution is the same for all consumers. A formal statement can be derived by considering a system with m consumers and n goods, and a utility function of each consumer as $z_i = f^i(x^i)$, where $x^i = (x_1^i, x_2^i, \ldots, x_n^i)$ is the vector of goods, for all $i = 1, \ldots, m$. The feasibility constraint is $\sum_{i=1}^m x_j^i = b_j$ for $j = 1, \ldots, n$. To find the Pareto optimal allocation, we maximize the Lagrangian: $L_i = f^i(x^i) + \sum_{k \neq i} \lambda_k \left( f^k(x^k) - z_k \right) + \sum_{j=1}^n \mu_j \left( b_j - \sum_{k=1}^m x_j^k \right)$, where $(\lambda_k)$ and $(\mu_j)$ are the vectors of multipliers. Taking the partial derivative of the Lagrangian with respect to each good $x_j^k$ for $j = 1, \ldots, n$ and $k = 1, \ldots, m$ gives the following system of first-order conditions: $f_{x_j^i}^i - \mu_j = 0$ for $j = 1, \ldots, n$, and $\lambda_k f_{x_j^k}^k - \mu_j = 0$ for $k \neq i$ and $j = 1, \ldots, n$, where $f_{x_j^i}$ denotes the partial derivative of $f$ with respect to $x_j^i$. Now, fix any goods $j$ and $s$. The above first-order conditions imply that $\frac{f_{x_j^i}^i}{f_{x_s^i}^i} = \frac{\mu_j}{\mu_s} = \frac{f_{x_j^k}^k}{f_{x_s^k}^k}$. Thus, in a Pareto-optimal allocation, the marginal rate of substitution must be the same for all consumers. Computation Algorithms for computing the Pareto frontier of a finite set of alternatives have been studied in computer science and power engineering. They include: "The maxima of a point set" "The maximum vector problem" or the skyline query "The scalarization algorithm" or the method of weighted sums
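As a concrete companion to the finite-alternatives algorithms listed above, here is a minimal C sketch of the simplest approach, a quadratic dominance filter over a finite set of criterion vectors; the sample points and the maximize-every-objective orientation are assumptions for illustration.

#include <stdbool.h>
#include <stdio.h>

#define M 2  /* number of objectives */

/* Returns true if point a strictly dominates point b, assuming every
   objective is to be maximized: a is at least as good in all objectives
   and strictly better in at least one. */
static bool dominates(const double a[M], const double b[M])
{
    bool strictly_better = false;
    for (int j = 0; j < M; j++) {
        if (a[j] < b[j])
            return false;
        if (a[j] > b[j])
            strictly_better = true;
    }
    return strictly_better;
}

int main(void)
{
    /* Hypothetical criterion vectors y = f(x) for a finite set of alternatives. */
    double y[][M] = { {1.0, 5.0}, {2.0, 4.0}, {1.5, 4.5}, {3.0, 1.0}, {2.0, 3.0} };
    int n = sizeof y / sizeof y[0];

    /* A point belongs to the Pareto front exactly when no other point dominates it. */
    for (int i = 0; i < n; i++) {
        bool dominated = false;
        for (int k = 0; k < n && !dominated; k++)
            if (k != i && dominates(y[k], y[i]))
                dominated = true;
        if (!dominated)
            printf("(%g, %g) is Pareto efficient\n", y[i][0], y[i][1]);
    }
    return 0;
}

On this sample, (2, 3) is eliminated because (2, 4) matches it in the first objective and beats it in the second; the remaining four points form the Pareto front. The skyline and maxima-of-a-point-set algorithms cited above compute the same set with better asymptotic complexity.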