https://en.wikipedia.org/wiki/Snippet%20%28programming%29
Snippet is a programming term for a small region of re-usable source code, machine code, or text. Ordinarily, these are formally defined operative units to incorporate into larger programming modules. Snippet management is a feature of some text editors, program source code editors, IDEs, and related software. It allows the user to avoid repetitive typing in the course of routine edit operations.

Definition
In programming practice, "snippet" refers narrowly to a portion of source code that is literally included by an editor program into a file, and is a form of copy-and-paste programming. This concrete inclusion contrasts with abstraction methods, such as functions or macros, which are abstractions within the language. Snippets are thus primarily used when these abstractions are not available or not desired, such as in languages that lack abstraction, or for clarity and absence of overhead. Snippets are similar to having static preprocessing built into the editor, and do not require support by a compiler. Conversely, this means that snippets cannot be reliably modified after the fact, and thus are vulnerable to all of the problems of copy-and-paste programming. For this reason, snippets are primarily used for simple sections of code (with little logic), or for boilerplate, such as copyright notices, function prototypes, common control structures, or standard library imports.

Overview
Snippet management is a text editor feature popular among software developers or others who routinely require content from a catalogue of repeatedly entered text (such as with source code or boilerplate). Often this feature is justified because the content varies only slightly (or not at all) each time it is entered.

Snippets in text editors
Text editors that include this feature ordinarily provide a mechanism to manage the catalogue, and separate "snippets" in the same manner that the text editor and operating system allow management of separate files. These basic manag
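Many editors expand snippets from templates with named placeholders. As a minimal sketch of that mechanism (the placeholder syntax and snippet names here are invented for illustration, not any particular editor's format):

```python
# Toy snippet expansion: look up a template by name and substitute
# ${field} placeholders, the way editor snippet managers typically do.
import string

SNIPPETS = {
    "copyright": "# Copyright (c) ${year} ${holder}. All rights reserved.",
    "for": "for ${var} in ${iterable}:\n    pass",
}

def expand(name: str, **fields: str) -> str:
    """Return the snippet body with placeholder fields filled in."""
    return string.Template(SNIPPETS[name]).safe_substitute(fields)

print(expand("copyright", year="2024", holder="Example Corp"))
```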
https://en.wikipedia.org/wiki/Welch%E2%80%93Satterthwaite%20equation
In statistics and uncertainty analysis, the Welch–Satterthwaite equation is used to calculate an approximation to the effective degrees of freedom of a linear combination of independent sample variances, also known as the pooled degrees of freedom, corresponding to the pooled variance.

For sample variances $s_i^2$ ($i = 1, \dots, n$), each respectively having $\nu_i$ degrees of freedom, often one computes the linear combination
\[
  \chi' = \sum_{i=1}^{n} k_i s_i^2,
\]
where each $k_i$ is a real positive number, typically $k_i = 1/n_i$. In general, the probability distribution of $\chi'$ cannot be expressed analytically. However, its distribution can be approximated by another chi-squared distribution, whose effective degrees of freedom are given by the Welch–Satterthwaite equation
\[
  \nu_{\chi'} \approx \frac{\left( \sum_{i=1}^{n} k_i s_i^2 \right)^2}{\sum_{i=1}^{n} \dfrac{(k_i s_i^2)^2}{\nu_i}}.
\]
There is no assumption that the underlying population variances are equal. This is known as the Behrens–Fisher problem. The result can be used to perform approximate statistical inference tests. The simplest application of this equation is in performing Welch's t-test.

See also
Pooled variance

Further reading
Michael Allwood (2008) "The Satterthwaite Formula for Degrees of Freedom in the Two-Sample t-Test", AP Statistics, Advanced Placement Program, The College Board.
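For Welch's t-test one takes $k_i = 1/n_i$ and $\nu_i = n_i - 1$; a small sketch with made-up sample values:

```python
# Effective degrees of freedom via the Welch-Satterthwaite equation.
# The two sample variances below are made-up illustration values.
def welch_satterthwaite(s2, nu, k):
    """s2: sample variances, nu: their degrees of freedom, k: weights."""
    num = sum(ki * s2i for ki, s2i in zip(k, s2)) ** 2
    den = sum((ki * s2i) ** 2 / nui for ki, s2i, nui in zip(k, s2, nu))
    return num / den

s2, n = [4.0, 9.5], [10, 7]
nu_eff = welch_satterthwaite(s2, [ni - 1 for ni in n], [1 / ni for ni in n])
print(round(nu_eff, 2))  # non-integer effective dof for Welch's t-test
```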
https://en.wikipedia.org/wiki/Hjulstr%C3%B6m%20curve
The Hjulström curve, named after Filip Hjulström (1902–1982), is a graph used by hydrologists and geologists to determine whether a river will erode, transport, or deposit sediment. It was originally published in his 1935 doctoral thesis "Studies of the morphological activity of rivers as illustrated by the River Fyris." The graph takes sediment particle size and water velocity into account. The upper curve shows the critical erosion velocity in cm/s as a function of particle size in mm, while the lower curve shows the deposition velocity as a function of particle size. Note that the axes are logarithmic.

The plot shows several key concepts about the relationships between erosion, transportation, and deposition. For particle sizes where friction is the dominant force preventing erosion, the curves follow each other closely and the required velocity increases with particle size. However, for cohesive sediment, mostly clay but also silt, the erosion velocity increases with decreasing grain size, as the cohesive forces are relatively more important when the particles get smaller. The critical velocity for deposition, on the other hand, depends on the settling velocity, and that decreases with decreasing grain size. The Hjulström curve shows that sand particles of a size around 0.1 mm require the lowest stream velocity to erode.

The curve was expanded by Åke Sundborg in 1956. He significantly improved the level of detail in the cohesive part of the diagram, and added lines for different modes of transportation. The result is called the Sundborg diagram, or the Hjulström–Sundborg diagram, in the academic literature. This curve dates back to early 20th-century research on river geomorphology and has mainly historical value nowadays, although its simplicity is still attractive. Among the drawbacks of this curve are that it does not take the water depth into account and, more importantly, that it does not show that sedimentation is caused by flow velocity
https://en.wikipedia.org/wiki/Apollo%20PRISM
PRISM (Parallel Reduced Instruction Set Multiprocessor) was Apollo Computer's high-performance CPU used in their DN10000 series workstations. It was for some time the fastest microprocessor available, delivering a high fraction of a Cray-1's performance in a workstation. Hewlett-Packard purchased Apollo in 1989, ending development of PRISM, although some of PRISM's ideas were later used in HP's own PA-RISC and Itanium processors.

PRISM was based on what would today be called a VLIW design, while most efforts of the era, 1988, were based on a more "pure" RISC approach. In early RISC designs, the core processor was simplified as much as possible in order to allow more of the chip's real estate to be used for registers, and to simplify the addition of instruction pipelines for improved performance.

Compilers
The compilers used with the systems were expected to dedicate more time during compilation to making effective use of the registers and cleaning the instruction stream. By doing instruction scheduling in the compiler, this design avoided the problems and complexity of dynamic instruction scheduling (where instructions for multiple functional units must be selected carefully in order to avoid interdependencies between intermediate values) encountered in superscalar designs such as Digital Equipment Corporation's Alpha. In some respects, the VLIW design can be thought of as "super-RISCy", as it offloads the instruction selection process to the compiler as well. In the VLIW design, the compiler examines the code and selects instructions that are known to be "safe", and then packages them into longer instruction words. For instance, for a CPU with two functional units, like the PRISM, the compiler would find pairs of safe instructions and stuff them into a single larger word. Inside the CPU, the instructions are simply split apart again, and fed into the selected units. This design minimizes logical changes to the CPU as functional units are added,
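The pairing step such a compiler performs can be sketched as follows; the three-register instruction format and the simple dependence test are illustrative assumptions, not PRISM's actual encoding:

```python
# Toy compile-time VLIW bundling: pack adjacent independent instructions
# into two-slot words, the way a two-unit VLIW compiler might.
def bundle(instrs):
    """instrs: list of (dest, src1, src2) register tuples, in program order."""
    words, i = [], 0
    while i < len(instrs):
        first = instrs[i]
        if i + 1 < len(instrs):
            nxt = instrs[i + 1]
            # Safe to pair only if the second instruction neither reads the
            # first one's result nor overwrites any register the first uses.
            if first[0] not in nxt[1:] and nxt[0] not in first:
                words.append((first, nxt))
                i += 2
                continue
        words.append((first, None))  # second slot becomes a no-op
        i += 1
    return words

prog = [("r1", "r2", "r3"), ("r4", "r5", "r6"), ("r7", "r1", "r4")]
for word in bundle(prog):
    print(word)  # first two pair up; the third depends on both, so it waits
```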
https://en.wikipedia.org/wiki/CompuCell3D
CompuCell3D (CC3D) is a three-dimensional C++ and Python software problem-solving environment for simulations of biocomplexity problems, integrating multiple mathematical morphogenesis models. These include the cellular Potts model (CPM), which can model cell clustering, growth, division, death, adhesion, and volume and surface area constraints, as well as partial differential equation solvers for modeling reaction–diffusion of external chemical fields, and cell-type automata for differentiation. By integrating these models, CompuCell3D enables modeling of cellular reactions to external chemical fields, such as secretion or resorption, and responses such as chemotaxis and haptotaxis.

CompuCell3D is conducive to experimentation and testing of biological models by providing a flexible and extensible package with many different levels of control. High-level steering is possible through CompuCell Player, an interactive GUI built upon Qt threads which execute in parallel with the computational back end. Functionality such as zooming, rotation, playing and pausing simulations, setting colors, and viewing cross sections is available through the player. Extending the back end is possible through Biologo, an XML-based domain-specific language, which after lexical analysis and code generation transparently converts to C++ extensions that can be compiled and dynamically loaded at runtime. The back end uses object-oriented design patterns which contribute to extensibility, reducing coupling between independently operating modules. Optional functionality can be encapsulated in plugins, which are dynamically loaded at runtime through an XML configuration file reference.

CompuCell3D can model several different phenomena, including avian limb development, in vitro capillary development, adhesion-driven cell sorting, Dictyostelium discoideum, and fluid flows. The binaries and source code, as well as documentation and examples, are available at th
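To make the cellular Potts model concrete, here is a generic, minimal Metropolis spin-copy step with adhesion and volume-constraint energy terms. This illustrates the model class only; it does not use CompuCell3D's actual API, and all parameter values are arbitrary:

```python
# Generic cellular Potts model sketch: a lattice site attempts to copy its
# cell id into a neighbor; the change is kept per the Metropolis rule.
import math, random

N = 10                               # N x N lattice, periodic boundaries
J, LAM, V0, T = 2.0, 1.0, 25, 4.0    # adhesion, volume strength/target, temp

def total_energy(lat):
    e, vol = 0.0, {}
    for x in range(N):
        for y in range(N):
            s = lat[x][y]
            vol[s] = vol.get(s, 0) + 1
            for dx, dy in ((1, 0), (0, 1)):          # count each bond once
                if s != lat[(x + dx) % N][(y + dy) % N]:
                    e += J
    # volume constraint for every cell except the medium (id 0)
    return e + sum(LAM * (v - V0) ** 2 for cid, v in vol.items() if cid)

def metropolis_step(lat):
    x, y = random.randrange(N), random.randrange(N)
    dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
    nx, ny = (x + dx) % N, (y + dy) % N
    if lat[x][y] == lat[nx][ny]:
        return
    before, old = total_energy(lat), lat[nx][ny]
    lat[nx][ny] = lat[x][y]                          # attempt the spin copy
    d_e = total_energy(lat) - before
    if d_e > 0 and random.random() >= math.exp(-d_e / T):
        lat[nx][ny] = old                            # reject: undo the copy

# one 5x5 cell (id 1) surrounded by medium (id 0)
lat = [[1 if 3 <= x < 8 and 3 <= y < 8 else 0 for y in range(N)] for x in range(N)]
for _ in range(2000):
    metropolis_step(lat)
```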
https://en.wikipedia.org/wiki/GgNMOS
Grounded-gate NMOS, commonly known as ggNMOS, is an electrostatic discharge (ESD) protection device used within CMOS integrated circuits (ICs). Such devices are used to protect the inputs and outputs of an IC, which can be accessed off-chip (wire-bonded to the pins of a package or directly to a printed circuit board) and are therefore subject to ESD when touched. An ESD event can deliver a large amount of energy to the chip, potentially destroying input/output circuitry; a ggNMOS device or other ESD protective devices provide a safe path for current to flow, instead of through more sensitive circuitry. ESD protection by means of such devices or other techniques is important to product reliability: 35% of all IC failures in the field are associated with ESD damage.

Structure
As the name implies, a ggNMOS device consists of a relatively wide NMOS device in which the gate, source, and body are tied together to ground. The drain of the ggNMOS is connected to the I/O pad under protection. A parasitic NPN bipolar junction transistor (BJT) is thus formed, with the drain (n-type) acting as the collector, the source (n-type) as the emitter, and the substrate (p-type) as the base. As explained below, a key element in the operation of the ggNMOS is the parasitic resistance present between the emitter and base terminals of the parasitic NPN BJT. This resistance is a result of the finite conductivity of the p-type doped substrate.

Operation
When a positive ESD event appears on the I/O pad (drain), the collector–base junction of the parasitic NPN BJT becomes reverse-biased to the point of avalanche breakdown. At this point, the positive current flowing from the base to ground induces a voltage across the parasitic resistor, causing a positive voltage to appear across the base–emitter junction. The positive VBE forward-biases this junction, triggering the parasitic NPN BJT.

References
https://www.researchgate.net/publication/4133911_Modelin
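To put the triggering condition in rough numbers, a back-of-the-envelope sketch (the resistance and turn-on voltage are illustrative assumptions, not data for any real process):

```python
# Rough trigger check for a ggNMOS clamp: after avalanche breakdown, the
# substrate current through the parasitic substrate resistance must raise
# V_BE above roughly 0.7 V to turn on the parasitic NPN.
V_BE_ON = 0.7     # silicon base-emitter turn-on voltage (V), typical value
R_SUB = 350.0     # parasitic substrate resistance (ohm), assumed

def substrate_current_to_trigger(r_sub=R_SUB, v_be_on=V_BE_ON):
    """Minimum base-to-ground current needed to forward-bias the junction."""
    return v_be_on / r_sub

print(f"I_sub needed: {substrate_current_to_trigger() * 1e3:.1f} mA")
```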
https://en.wikipedia.org/wiki/Nautilus%20%28secure%20telephone%29
Nautilus is a program which allows two parties to securely communicate using modems or TCP/IP. It runs from a command line and is available for the Linux and Windows operating systems. The name, a play on the Clipper chip, was based upon Jules Verne's Nautilus and its ability to overcome a clipper ship. The program was originally developed by Bill Dorsey, Andy Fingerhut, Paul Rubin, Bill Soley, and David Miller.

Nautilus is historically significant in the realm of secure communications because it was one of the first programs released as open source to the general public that used strong encryption. It was created as a response to the Clipper chip, through which the US government planned to use a key escrow scheme on all products that used the chip. This would allow the government to monitor "secure" communications. Once this program and the similar program PGPfone were available on the internet, the proverbial cat was out of the bag and it would have been nearly impossible to stop the use of strong encryption for telephone communications.

The project had to move its web presence by the end of May 2014 due to a decision to shut down the developer platform that hosted the project.

External links
New Nautilus homepage (from May 1, 2014)
"Can Nautilus Sink Clipper?" Article in Wired, Aug 1995
https://en.wikipedia.org/wiki/Voice%20inversion
Voice inversion scrambling is an analog method of obscuring the content of a transmission. It is sometimes used in public service radio, automobile racing, cordless telephones, and the Family Radio Service. Without a descrambler, the transmission makes the speaker "sound like Donald Duck". Despite the term, the technique operates on the passband of the information and so can be applied to any information being transmitted.

Forms and details
There are various forms of voice inversion which offer differing levels of security. Overall, voice inversion scrambling offers little true security, as software and even hobbyist kits for scrambling and descrambling are available from kit makers. The cadence of the speech is not changed, and it is often easy to guess what is happening in the conversation by listening for other audio cues like questions, short responses, and other language cadences.

In the simplest form of voice inversion, each component at frequency $f$ is replaced with a component at $f_c - f$, where $f_c$ is the frequency of a carrier wave. This can be done by amplitude-modulating the speech signal with the carrier, then applying a low-pass filter to select the lower sideband. This will make the low tones of the voice sound like high ones and vice versa. This process also occurs naturally if a radio receiver is tuned to a single-sideband transmission but set to decode the wrong sideband.

There are more advanced forms of voice inversion which are more complex and require more effort to descramble. One method is to use a random code to choose the carrier frequency and then change this code in real time. This is called rolling code voice inversion, and one can often hear the "ticks" in the transmission which signal the changing of the inversion point. Another method is split-band voice inversion, in which the band is split and each band is inverted separately. A rolling code can also be added to this method for variable split band inversion (VSB). Common carrier frequencies are: 2.
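A minimal sketch of this modulate-and-filter inversion in NumPy; the sample rate, carrier, and filter length are illustrative choices, not any particular scrambler's specification:

```python
# Simple voice inversion (f -> fc - f): amplitude-modulate with a carrier
# at fc, then low-pass to keep only the lower sideband.
import numpy as np

FS = 8000   # sample rate (Hz)
FC = 3000   # carrier / inversion point (Hz), assumed

def lowpass(x: np.ndarray, cutoff: float, taps: int = 257) -> np.ndarray:
    """Windowed-sinc FIR low-pass, normalized to unity gain at DC."""
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(2 * cutoff / FS * n) * np.hamming(taps)
    return np.convolve(x, h / h.sum(), mode="same")

def invert(speech: np.ndarray) -> np.ndarray:
    t = np.arange(len(speech)) / FS
    mixed = speech * np.cos(2 * np.pi * FC * t)  # energy at FC - f and FC + f
    return lowpass(mixed, FC)                    # keep lower sideband: FC - f

tone = np.sin(2 * np.pi * 500 * np.arange(FS) / FS)  # 500 Hz test tone
scrambled = invert(tone)       # now concentrated near FC - 500 = 2500 Hz
restored = invert(scrambled)   # inverting twice with the same carrier
                               # approximately restores the input, which is
                               # why plain inversion is so easy to descramble
```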
https://en.wikipedia.org/wiki/Cliff%20effect
In telecommunications, the (digital) cliff effect or brickwall effect is a sudden loss of digital signal reception. Unlike analog signals, which fade gradually when signal strength decreases or electromagnetic interference or multipath increases, a digital signal provides data which is either perfect or non-existent at the receiving end. It is named for a graph of reception quality versus signal quality, where the digital signal "falls off a cliff" instead of having a gradual rolloff. This is an example of an EXIT chart.

The phenomenon is primarily seen in broadcasting, where signal strength is liable to vary, rather than in recorded media, which generally have a good signal. However, it may be seen in significantly damaged media at the edge of readability.

Broadcasting

Digital television
This effect can most easily be seen on digital television, including both satellite TV and over-the-air terrestrial TV. While forward error correction is applied to the broadcast, once a minimum threshold of signal quality (a maximum bit error rate) is crossed, it is no longer enough for the decoder to recover. The picture may break up (macroblocking), lock on a freeze frame, or go blank. Causes include rain fade or solar transit on satellites, and temperature inversions and other weather or atmospheric conditions causing anomalous propagation on the ground.

Three issues particularly manifest the cliff effect. Firstly, anomalous conditions will cause occasional signal degradation. Secondly, if one is located in a fringe area, where the signal is just barely strong enough to receive, then usual variation in signal quality will cause relatively frequent signal degradation, and a very small change in overall signal quality can have a dramatic impact on the frequency of signal degradation – one incident per hour (not significantly affecting watchability) versus problems every few seconds or continuous problems. Thirdly, in some cases, where the sign
https://en.wikipedia.org/wiki/Therapeutic%20angiogenesis
Therapeutic angiogenesis is an experimental area in the treatment of ischemia, the condition associated with a decrease in blood supply to certain organs, tissues, or body parts. This is usually caused by constriction or obstruction of the blood vessels. Angiogenesis is the natural healing process by which new blood vessels are formed to supply the organ or part in deficit with oxygen-rich blood. The goal of therapeutic angiogenesis is to stimulate the creation of new blood vessels in ischemic organs, tissues, or parts, with the hope of increasing the level of oxygen-rich blood reaching these areas.

See also
Vascular endothelial growth factor
https://en.wikipedia.org/wiki/C-list%20%28computer%20security%29
In capability-based computer security, a C-list is an array of capabilities, usually associated with a process and maintained by the kernel. The program running in the process does not manipulate capabilities directly, but refers to them via C-list indexes—integers indexing into the C-list.

The file descriptor table in Unix is an example of a C-list. Unix processes do not manipulate file descriptors directly, but refer to them via file descriptor numbers, which are C-list indexes. In the KeyKOS and EROS operating systems, a process's capability registers constitute a C-list.

See also
Access-control list
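A toy sketch of the idea, with a kernel-side table and integer handles; the class and method names are invented for illustration:

```python
# Sketch of a kernel-maintained C-list: user code holds only integer
# indexes (like Unix file descriptors); the kernel maps index -> capability.
class Capability:
    def __init__(self, obj, rights):
        self.obj, self.rights = obj, set(rights)

class Process:
    def __init__(self):
        self.clist = []                      # per-process capability list

    def grant(self, obj, rights) -> int:
        """Kernel installs a capability; the process sees only the index."""
        self.clist.append(Capability(obj, rights))
        return len(self.clist) - 1

    def invoke(self, index: int, right: str):
        cap = self.clist[index]              # index validated by the kernel
        if right not in cap.rights:
            raise PermissionError(right)
        return cap.obj

p = Process()
fd = p.grant("/tmp/log", {"read", "write"})  # like open() returning an fd
print(p.invoke(fd, "read"))
```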
https://en.wikipedia.org/wiki/BTRON
BTRON (Business TRON) is one of the subprojects of the TRON Project proposed by Ken Sakamura, responsible for the business phase. It refers to the operating systems (OS), keyboards, peripheral interface specifications, and other items related to personal computers (PCs) that were developed there. Originally the term referred to specifications rather than specific products, but in practice "BTRON" is often used to refer to implementations. Currently, Personal Media Corporation's B-right/V is an implementation of BTRON3, and a software product called "" that includes it has been released.

Specifications
As with other TRON systems, only the specification of BTRON has been formulated; the implementation method is not specified. Implementation is mentioned in this section only to the extent necessary to explain the specification; see the Implementation section for details.

BTRON1, BTRON2, BTRON3
The BTRON project began with Matsushita Electric Industrial and Personal Media prototyping "BTRON286", an implementation on the 16-bit 286 CPU for the CEC machine described below. BTRON1 specifications include the BTRON1 Programming Standard Handbook and the BTRON1 Specification Software Specification, which describe the OS API. For BTRON2, only the specification has been created and published; it was planned to be implemented on evaluation machines equipped with TRON chips made by Fujitsu, named "2B". One of its features is that all OS-managed computing resources, such as memory, processes, and threads, are handled in the real/virtual object model characteristic of BTRON. SIGBTRON's TRON-chip machine MCUBE implemented "3B", which is 32-bit and uses an ITRON-specification RTOS (modified from "ItIs") as the microkernel. The B-right specification used in , etc., is "BTRON3" (currently, the microkernel is I-right); the specification that B-right/V conforms to is published as the BTRON3 spec
https://en.wikipedia.org/wiki/Solar%20transit
In astronomy, a solar transit is a movement of any object passing between the Sun and the Earth. This includes the planets Mercury and Venus (see Transit of Mercury and Transit of Venus). A solar eclipse is also a solar transit of the Moon, but technically only if it does not cover the entire disc of the Sun (an annular eclipse), as "transit" counts only objects that are smaller than what they are passing in front of. Solar transit is only one of several types of astronomical transit.

A solar transit (also called a solar outage, sometimes solar fade, sun outage, or sun fade) also occurs to communications satellites, which pass in front of the Sun for several minutes each day for several days straight during a period in the months around the equinoxes, the exact dates depending on where the satellite is in the sky relative to its earth station. Because the Sun also produces a great deal of microwave radiation in addition to sunlight, it overwhelms the microwave radio signals coming from the satellite's transponders. This enormous electromagnetic interference causes interruptions in fixed satellite services that use satellite dishes, including TV networks and radio networks, as well as VSAT and DBS. Only downlinks from the satellite are affected; uplinks from the Earth normally are not, as the planet "shades" the Earth station when viewed from the satellite.

Satellites in geosynchronous orbit are irregularly affected based on their inclination. Reception from satellites in other orbits is frequently but only momentarily affected, and by their nature the same signal is usually repeated or relayed on another satellite, if a tracking dish is used at all. Satellite radio and other services like GPS are not affected, as they use no receiving dish and therefore do not concentrate the interference. (GPS and certain satellite radio systems use non-geosynchronous satellites.)

Solar transit begins with only a brief degradation in signal quality for a few moments.
https://en.wikipedia.org/wiki/The%20Fairylogue%20and%20Radio-Plays
The Fairylogue and Radio-Plays was an early attempt to bring L. Frank Baum's Oz books to the motion picture screen. It was a mixture of live actors, hand-tinted magic lantern slides, and film. Baum himself would appear as if he were giving a lecture, while he interacted with the characters (both on stage and on screen). Although acclaimed throughout its tour, the show experienced budgetary problems (with the show costing more to produce than the money that sold-out houses could bring in) and folded after two months of performances. It opened in Grand Rapids, Michigan on September 24, 1908. It then opened in Orchestra Hall in Chicago on October 1, toured the country, and ended its run in New York City. There, it was scheduled to run through December 31, and ads for it continued to run in The New York Times until then, but it reportedly closed on December 16. After First National Pictures acquired Selig Polyscope, the film was re-released on September 24, 1925.

Although today seen mostly as a failed first effort to adapt the Oz books, The Fairylogue and Radio-Plays is notable in film history because it contains the earliest original film score to be documented. The film is lost, but the script for Baum's narration and production stills survive.

Michael Radio Color
The films were colored (credited as "illuminations") by Duval Frères of Paris, in a process known as "Radio-Play", and were noted for being the most lifelike hand-tinted imagery of the time. Baum once claimed in an interview that "Michael Radio" was a Frenchman who colored the films, though no evidence of such a person, even with the more proper French spelling "Michel", as second-hand reports unsurprisingly revise it, has been documented. It did not refer to the contemporary concept of radio (or, for that matter, a radio play), but played on notions of the new and fantastic at the time, similar to the way "high-tech" or sometimes "cyber" would be used later in the century. The "Fairylogue" part of
https://en.wikipedia.org/wiki/Mock%20modular%20form
In mathematics, a mock modular form is the holomorphic part of a harmonic weak Maass form, and a mock theta function is essentially a mock modular form of weight $1/2$. The first examples of mock theta functions were described by Srinivasa Ramanujan in his last (1920) letter to G. H. Hardy and in his lost notebook. Sander Zwegers discovered that adding certain non-holomorphic functions to them turns them into harmonic weak Maass forms.

History
Ramanujan's 12 January 1920 letter to Hardy listed 17 examples of functions that he called mock theta functions, and his lost notebook contained several more examples. (Ramanujan used the term "theta function" for what today would be called a modular form.) Ramanujan pointed out that they have an asymptotic expansion at the cusps, similar to that of modular forms of weight $1/2$, possibly with poles at cusps, but cannot be expressed in terms of "ordinary" theta functions. He called functions with similar properties "mock theta functions". Zwegers later discovered the connection of the mock theta functions with weak Maass forms.

Ramanujan associated an order to his mock theta functions, which was not clearly defined. Before the work of Zwegers, the orders of known mock theta functions included 3, 5, 6, 7, 8, and 10. Ramanujan's notion of order later turned out to correspond to the conductor of the Nebentypus character of the weight $1/2$ harmonic Maass forms which admit Ramanujan's mock theta functions as their holomorphic projections.

In the next few decades, Ramanujan's mock theta functions were studied by Watson, Andrews, Selberg, Hickerson, Choi, McIntosh, and others, who proved Ramanujan's statements about them and found several more examples and identities. (Most of the "new" identities and examples were already known to Ramanujan and reappeared in his lost notebook.) In 1936, Watson found that under the action of elements of the modular group, the order-3 mock theta functions almost transform like modular forms of weight $3/2$ (multiplied by
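For concreteness, the first and best-known example from Ramanujan's letter is his third-order mock theta function, a standard series valid for $|q| < 1$:

```latex
\[
  f(q) \;=\; \sum_{n=0}^{\infty}
    \frac{q^{n^{2}}}{(1+q)^{2}\,(1+q^{2})^{2}\cdots(1+q^{n})^{2}}
\]
```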
https://en.wikipedia.org/wiki/Digital%20delay%20generator
A digital delay generator (also known as a digital-to-time converter) is a piece of electronic test equipment that provides precise delays for triggering, syncing, delaying, and gating events. These generators are used in many experiments, controls, and processes where electronic timing of a single event or multiple events to a standard timing reference is needed. The digital delay generator may initiate a sequence of events or be triggered by an event. What differentiates it from ordinary electronic timing is the synchronicity of its outputs to each other and to the initiating event. A time-to-digital converter performs the inverse function.

Equipment
The digital delay generator is similar to a pulse generator in function, but the timing resolution is much finer, and the delay and width jitter much less. Some manufacturers, calling their units "digital delay and pulse generators", have added independent amplitude, polarity, and level control to each of their outputs in addition to delay and width control. Each channel then provides its own delay, width, and amplitude control, with the triggering synchronized to an external source or an internal rep-rate generator, like a general-purpose pulse generator. Some delay generators provide precise delays (edges) to trigger devices; others provide accurate delays and widths as well, to allow a gating function. Some delay generators provide a single timing channel, while others provide multiple timing channels.

Digital delay generator outputs are typically logic level, but some offer higher voltages to cope with electromagnetic interference environments. For very harsh environments, optical outputs and/or inputs with fiber-optic connectors are also offered as options by some manufacturers. In general, a delay generator operates in a 50 Ω transmission-line environment with the line terminated in its characteristic impedance to minimize reflections and timing ambiguities. Historically, digital delay generators were single channel devices wit
https://en.wikipedia.org/wiki/Pointcheval%E2%80%93Stern%20signature%20algorithm
In cryptography, the Pointcheval–Stern signature algorithm is a digital signature scheme based on the closely related ElGamal signature scheme. It changes the ElGamal scheme slightly to produce an algorithm which has been proven secure in a strong sense against adaptive chosen-message attacks, assuming the discrete logarithm problem is intractable. David Pointcheval and Jacques Stern developed the forking lemma technique in constructing their proof for this algorithm; it has since been used in other security investigations of various cryptographic algorithms.
https://en.wikipedia.org/wiki/Bourbaki%20dangerous%20bend%20symbol
The dangerous bend or caution symbol ☡ was created by the Nicolas Bourbaki group of mathematicians and appears in the margins of mathematics books written by the group. It resembles a road sign that indicates a "dangerous bend" in the road ahead, and is used to mark passages tricky on a first reading or with an especially difficult argument.

Variations
Others have used variations of the symbol in their books. The computer scientist Donald Knuth introduced an American-style road-sign depiction in his Metafont and TeX systems, with a pair of adjacent signs indicating doubly dangerous passages.

Typography
In the LaTeX typesetting system, Knuth's dangerous bend symbol can be produced by first loading the font manfnt (a font with extra symbols used in Knuth's TeX manual) with \usepackage{manfnt} and then typing \dbend. There are several variations, given by \lhdbend, \reversedvideodbend, \textdbend, \textlhdbend, and \textreversedvideodbend.

See also
Halmos box

External links
Knuth's use of the dangerous bend sign. Public domain GIF files.
LaTeX style file to provide a "danger" environment marked by a dangerous bend sign, based on Knuth's book.
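The commands above assemble into a minimal compilable document (assuming the manfnt package is installed, as it is in standard TeX distributions):

```latex
% Minimal LaTeX document showing Knuth's dangerous bend sign via manfnt.
\documentclass{article}
\usepackage{manfnt}
\begin{document}
\dbend\quad This passage is tricky on a first reading.

\textdbend\quad Text-sized variant for use within a paragraph.
\end{document}
```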
https://en.wikipedia.org/wiki/Straight-nine%20engine
The straight-nine engine (also referred to as an inline-nine engine; abbreviated I9 or L9) is a piston engine with nine cylinders arranged in a straight line along the crankshaft. The most common application is for large diesel engines used by ships. Examples of straight-nine engines include:

Rolls-Royce Bergen B, C and K series
Wärtsilä RT-flex60C-B, RT-flex82C, RTA84T-D, RTA84C, RTA96C, 20, 26, 32, Wasa32LN, 38, 46 and 46F series
https://en.wikipedia.org/wiki/Series%20of%20tubes
"A series of tubes" is a phrase used originally as an analogy by then-United States Senator Ted Stevens (R-Alaska) to describe the Internet in the context of opposing network neutrality. On June 28, 2006, he used this metaphor to criticize a proposed amendment to a committee bill. The amendment would have prohibited Internet service providers such as AT&T, Comcast, Time Warner Cable and Verizon Communications from charging fees to give some companies' data a higher priority in relation to other traffic. The metaphor was widely ridiculed, because Stevens was perceived to have displayed an extremely limited understanding of the Internet, despite his leading the Senate committee responsible for regulating it. Partial text of Stevens's comments Media commentary On June 28, 2006, Public Knowledge government affairs manager Alex Curtis wrote a brief blog entry introducing the senator's speech and posted an MP3 recording. The next day, the Wired magazine blog 27B Stroke 6 featured a lengthier post by Ryan Singel, which included Singel's transcriptions of some parts of Stevens's speech considered the most humorous. Within days, thousands of other blogs and message boards posted the story. Most writers and commentators derisively cited several of Stevens's misunderstandings of Internet technology, arguing that the speech showed that he had formed a strong opinion on a topic which he understood poorly (e.g., referring to an e-mail message as "an Internet," and blaming bandwidth issues for an e-mail problem much more likely to be caused by mail server or routing issues). The story sparked mainstream media attention, including a mention in The New York Times. The technology podcast This Week in Tech also discussed the incident. According to The Wall Street Journal, as summarized by MediaPost commentator Ross Fadner, "'The Internet is a Series of Tubes!' spawned a new slogan that became a rallying cry for Net neutrality advocates. ... Stevens's overly simplistic description
https://en.wikipedia.org/wiki/Indeterminate%20system
In mathematics, particularly in algebra, an indeterminate system is a system of simultaneous equations (e.g., linear equations) which has more than one solution (sometimes infinitely many solutions). In the case of a linear system, the system may be said to be underspecified, in which case the presence of more than one solution would imply an infinite number of solutions (since the system would be describable in terms of at least one free variable), but that property does not extend to nonlinear systems (e.g., the system with the single equation $x^2 = 1$, which has exactly two solutions).

An indeterminate system is by definition consistent, in the sense of having at least one solution. For a system of linear equations, the number of equations in an indeterminate system could be the same as the number of unknowns, less than the number of unknowns (an underdetermined system), or greater than the number of unknowns (an overdetermined system). Conversely, any of those three cases may or may not be indeterminate.

Examples
The following examples of indeterminate systems of equations have, respectively, fewer equations than, as many equations as, and more equations than unknowns:

Conditions giving rise to indeterminacy
In linear systems, indeterminacy occurs if and only if the number of independent equations (the rank of the augmented matrix of the system) is less than the number of unknowns and is the same as the rank of the coefficient matrix. For if there are at least as many independent equations as unknowns, that will eliminate any stretches of overlap of the equations' surfaces in the geometric space of the unknowns (aside from possibly a single point), which in turn excludes the possibility of having more than one solution. On the other hand, if the rank of the augmented matrix exceeds (necessarily by one, if at all) the rank of the coefficient matrix, then the equations will jointly contradict each other, which excludes the possibility of having any solution. Finding the solution set of an indeterminate lin
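For concreteness, here are illustrative systems of the three kinds described in the Examples section; these are assumed examples (each is equivalent to the single equation $x + y = 2$ and so has infinitely many solutions):

```latex
% Assumed illustrative indeterminate systems; requires amsmath.
\begin{gather*}
  x + y = 2
  \\[1ex]
  \begin{aligned} x + y &= 2 \\ 2x + 2y &= 4 \end{aligned}
  \\[1ex]
  \begin{aligned} x + y &= 2 \\ 2x + 2y &= 4 \\ 3x + 3y &= 6 \end{aligned}
\end{gather*}
```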
https://en.wikipedia.org/wiki/Independent%20equation
An independent equation is an equation in a system of simultaneous equations which cannot be derived algebraically from the other equations. The concept typically arises in the context of linear equations. If it is possible to duplicate one of the equations in a system by multiplying each of the other equations by some number (potentially a different number for each equation) and summing the resulting equations, then that equation is dependent on the others. But if this is not possible, then that equation is independent of the others.

If an equation is independent of the other equations in its system, then it provides information beyond that which is provided by the other equations. In contrast, if an equation is dependent on the others, then it provides no information not contained in the others collectively, and the equation can be dropped from the system without any information loss. The number of independent equations in a system equals the rank of the augmented matrix of the system—the system's coefficient matrix with one additional column appended, that column being the column vector of constants.

The number of independent equations in a system of consistent equations (a system that has at least one solution) can never be greater than the number of unknowns. Equivalently, if a system has more independent equations than unknowns, it is inconsistent and has no solutions.

See also
Linear algebra
Indeterminate system
Independent variable
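The rank characterization lends itself to a quick check in code; a small NumPy sketch (the system is made up, with its third equation the sum of the first two):

```python
# Counting independent equations via matrix rank. The third equation is
# the sum of the first two, so it adds no information and the rank is 2.
import numpy as np

A = np.array([[1.0,  1.0],   # x +  y = 2
              [1.0, -1.0],   # x -  y = 0
              [2.0,  0.0]])  # 2x     = 2  (= eq1 + eq2)
b = np.array([2.0, 0.0, 2.0])

augmented = np.column_stack([A, b])
print(np.linalg.matrix_rank(augmented))  # 2 independent equations
```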
https://en.wikipedia.org/wiki/Bellard%27s%20formula
Bellard's formula is used to calculate the nth digit of π in base 16. Bellard's formula was discovered by Fabrice Bellard in 1997. It is about 43% faster than the Bailey–Borwein–Plouffe formula (discovered in 1995). It has been used in PiHex, the now-completed distributed computing project. One important application is verifying computations of all digits of pi performed by other means. Rather than having to compute all of the digits twice by two separate algorithms to ensure that a computation is correct, the final digits of a very long all-digits computation can be verified by the much faster Bellard's formula.

The formula is:
\[
  \pi = \frac{1}{2^{6}} \sum_{n=0}^{\infty} \frac{(-1)^{n}}{2^{10n}}
  \left( -\frac{2^{5}}{4n+1} - \frac{1}{4n+3} + \frac{2^{8}}{10n+1}
         - \frac{2^{6}}{10n+3} - \frac{2^{2}}{10n+5}
         - \frac{2^{2}}{10n+7} + \frac{1}{10n+9} \right)
\]

External links
Fabrice Bellard's PI page
PiHex web site
David Bailey, Peter Borwein, and Simon Plouffe's BBP formula (On the rapid computation of various polylogarithmic constants) (PDF)
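A quick numerical sanity check of the series in Python; each successive term carries a factor of $2^{-10}$, so partial sums gain roughly three decimal digits of accuracy per term:

```python
# Partial sums of Bellard's formula converge rapidly to pi.
from math import pi

def bellard(terms: int) -> float:
    s = 0.0
    for n in range(terms):
        s += ((-1) ** n / 2 ** (10 * n)) * (
            -(2 ** 5) / (4 * n + 1) - 1 / (4 * n + 3)
            + (2 ** 8) / (10 * n + 1) - (2 ** 6) / (10 * n + 3)
            - (2 ** 2) / (10 * n + 5) - (2 ** 2) / (10 * n + 7)
            + 1 / (10 * n + 9)
        )
    return s / 2 ** 6

print(bellard(6), pi)  # agrees to double precision after a few terms
```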
https://en.wikipedia.org/wiki/Unit%20construction
For the vehicle design where the vehicle's skin is used as a load-bearing element, see Monocoque.

Unit construction is the design of larger motorcycles where the engine and gearbox components share a single casing. This sometimes includes the design of automobile engines and was often loosely applied to motorcycles with rather different internal layouts, such as the flat-twin BMW models. Prior to unit construction, the engine and gearbox had separate casings and were connected by a primary chain drive running in an oil-bath chaincase. The new system used a similar chain drive, and both arrangements had separate oil reservoirs for the engine, gearbox, and primary drive. Triumph and BSA were already using cast non-ferrous alloy chaincases and started converting to unit construction in the 1950s. A driving factor behind the BSA/Triumph change was that Lucas had declared an intention to abandon production of motorcycle dynamos and magnetos, and instead produce only alternators. By contrast, Velocette, Matchless/AJS, and Norton motorcycles continued to be of pre-unit construction (the former machines with pressed-steel primary cases) until the end of production in the 1960s and 1970s respectively. In reality, the casings were not really "unitary", as the crankcase section was vertically divided in the middle and no oil was shared between the three portions.

In the 1960s, Japanese motorcycles introduced the now-familiar horizontally split clamshell, which has become almost universal. Modern horizontally split four-stroke engines invariably use a single oil reservoir (whether wet- or dry-sump) but, while this simplifies matters, it is arguable that the previous system of having different types of oil for engine and gearbox is preferable. The BMC Mini was an early example of a car with the "gearbox-in-the-sump"; but this practice of using a single oil reservoir, which has become the norm for motorbikes, is generally undesirable for cars and trucks. Two-stroke "total-loss" bikes always have se
https://en.wikipedia.org/wiki/Local%20shared%20object
A local shared object (LSO), commonly called a Flash cookie (due to its similarity with an HTTP cookie), is a piece of data that websites that use Adobe Flash may store on a user's computer. Local shared objects have been used by all versions of Flash Player (developed by Macromedia, which was later acquired by Adobe Systems) since version 6. Flash cookies, which can be stored or retrieved whenever a user accesses a page containing a Flash application, are a form of local storage. Similar to cookies, they can be used to store user preferences, save data from Flash games, or track users' Internet activity. LSOs have been criticised as a breach of browser security, but there are now browser settings and add-ons to limit the duration of their storage.

Storage
Local shared objects contain data stored by individual websites. Data is stored in the Action Message Format. With the default settings, Flash Player does not seek the user's permission to store local shared objects on the hard disk. By default, an SWF application running in Flash Player from version 9 to 11 (as of Sept 1, 2011) may store up to 100 kB of data on the user's hard drive. If the application attempts to store more, a dialog asks the user whether to allow or deny the request.

Adobe Flash Player does not allow third-party local shared objects to be shared across domains. For example, a local shared object from "www.example.com" cannot be read by the domain "www.example.net". However, the first-party website can always pass data to a third party via some settings found in the dedicated XML file, passing the data in the request to the third party. Also, third-party LSOs are allowed to store data by default. By default, LSO data is shared across browsers on the same machine. As an example: if a visitor accesses a site using their Firefox browser, then views a page displaying a specific product, then closes the Firefox browser, the information about that product can be stored in the LSO. If that same v
https://en.wikipedia.org/wiki/Vector%20notation
In mathematics and physics, vector notation is a commonly used notation for representing vectors, which may be Euclidean vectors, or more generally, members of a vector space. For representing a vector, the common typographic convention is lower-case, upright boldface type, as in $\mathbf{v}$. The International Organization for Standardization (ISO) recommends either bold italic serif, as in $\boldsymbol{v}$, or non-bold italic serif accented by a right arrow, as in $\vec{v}$. In advanced mathematics, vectors are often represented in a simple italic type, like any variable.

History
In 1835 Giusto Bellavitis introduced the idea of equipollent directed line segments, which resulted in the concept of a vector as an equivalence class of such segments. The term vector was coined by W. R. Hamilton around 1843, as he revealed quaternions, a system which uses vectors and scalars to span a four-dimensional space. For a quaternion q = a + bi + cj + dk, Hamilton used two projections: S q = a, for the scalar part of q, and V q = bi + cj + dk, the vector part. Using the modern terms cross product (×) and dot product (.), the quaternion product of two vectors p and q can be written pq = –p.q + p×q. In 1878, W. K. Clifford severed the two products to make the quaternion operation useful for students in his textbook Elements of Dynamic. Lecturing at Yale University, Josiah Willard Gibbs supplied notation for the scalar product and vector products, which was introduced in Vector Analysis. In 1891, Oliver Heaviside argued for Clarendon type to distinguish vectors from scalars. He criticized the use of Greek letters by Tait and Gothic letters by Maxwell. In 1912, J. B. Shaw contributed his "Comparative Notation for Vector Expressions" to the Bulletin of the Quaternion Society. Subsequently, Alexander Macfarlane described 15 criteria for clear expression with vectors in the same publication. Vector ideas were advanced by Hermann Grassmann in 1841, and again in 1862 in the German language. But German mathematicians wer
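Hamilton's identity pq = -p.q + p×q for two vectors (pure quaternions) is easy to verify numerically; a small sketch with arbitrary values:

```python
# Verify pq = -(p . q) + p x q for pure quaternions p and q, using an
# explicit Hamilton product on (w, x, y, z) quadruples.
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, v1 = a[0], np.array(a[1:])
    w2, v2 = b[0], np.array(b[1:])
    w = w1 * w2 - v1 @ v2
    v = w1 * v2 + w2 * v1 + np.cross(v1, v2)
    return np.array([w, *v])

p = np.array([1.0, 2.0, 3.0])
q = np.array([4.0, 5.0, 6.0])
prod = qmul([0.0, *p], [0.0, *q])
print(prod[0], -(p @ q))           # scalar part equals -p.q  (-32.0)
print(prod[1:], np.cross(p, q))    # vector part equals p x q ([-3, 6, -3])
```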
https://en.wikipedia.org/wiki/Amazon%20S3
Amazon S3 or Amazon Simple Storage Service is a service offered by Amazon Web Services (AWS) that provides object storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its e-commerce network. Amazon S3 can store any type of object, which allows uses like storage for Internet applications, backups, disaster recovery, data archives, data lakes for analytics, and hybrid cloud storage. AWS launched Amazon S3 in the United States on March 14, 2006, then in Europe in November 2007.

Technical details

Design
Amazon S3 manages data with an object storage architecture which aims to provide scalability, high availability, and low latency with high durability. The basic storage units of Amazon S3 are objects, which are organized into buckets. Each object is identified by a unique, user-assigned key. Buckets can be managed using the console provided by Amazon S3, programmatically with the AWS SDK, or with the REST application programming interface. Objects can be up to five terabytes in size. Requests are authorized using an access control list associated with each object and bucket, and support versioning, which is disabled by default. Since buckets are typically the size of an entire file system mount in other systems, this access control scheme is very coarse-grained. In other words, unique access controls cannot be associated with individual files.

Amazon S3 can be used to replace static web-hosting infrastructure with HTTP client-accessible objects, index document support, and error document support. The Amazon AWS authentication mechanism allows the creation of authenticated URLs, valid for a specified amount of time. Every item in a bucket can also be served as a BitTorrent feed: the Amazon S3 store can act as a seed host for a torrent, and any BitTorrent client can retrieve the file. This can drastically reduce the bandwidth cost for the download of popular objects. A bucket can be configured to save HTTP log i
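A minimal sketch of the bucket/key/object model using the AWS SDK for Python (boto3); the bucket name and key are placeholders, and credentials are assumed to be configured in the environment:

```python
# Basic S3 object round-trip plus a time-limited authenticated URL,
# matching the mechanisms described above. "example-bucket" is a placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_object(Bucket="example-bucket", Key="docs/note.txt", Body=b"hello")
body = s3.get_object(Bucket="example-bucket", Key="docs/note.txt")["Body"].read()

# Presigned URL valid for one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "docs/note.txt"},
    ExpiresIn=3600,  # seconds
)
print(body, url)
```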
https://en.wikipedia.org/wiki/Trace%20fossil%20classification
Trace fossils are classified in various ways for different purposes. Traces can be classified taxonomically (by morphology), ethologically (by behavior), and toponomically, that is, according to their relationship to the surrounding sedimentary layers. Except in the rare cases where the original maker of a trace fossil can be identified with confidence, phylogenetic classification of trace fossils is an unreasonable proposition.

Taxonomic classification
The taxonomic classification of trace fossils parallels the taxonomic classification of organisms under the International Code of Zoological Nomenclature. In trace fossil nomenclature a Latin binomial name is used, just as in animal and plant taxonomy, with a genus and specific epithet. However, the binomial names are not linked to an organism, but rather just to a trace fossil. This is due to the rarity of association between a trace fossil and a specific organism or group of organisms. Trace fossils are therefore included in an ichnotaxon separate from Linnaean taxonomy. When referring to trace fossils, the terms ichnogenus and ichnospecies parallel genus and species respectively. The most promising cases of phylogenetic classification are those in which similar trace fossils show details complex enough to deduce the makers, such as bryozoan borings, large trilobite trace fossils such as Cruziana, and vertebrate footprints. However, most trace fossils lack sufficiently complex details to allow such classification.

Ethologic classification

The Seilacherian System
Adolf Seilacher was the first to propose a broadly accepted ethological basis for trace fossil classification. He recognized that most trace fossils are created by animals in one of five main behavioural activities, and named them accordingly: Cubichnia are the traces of organisms left on the surface of a soft sediment. This behaviour may simply be resting, as in the case of a starfish, but might also evidence the hiding place of prey, or even the ambus
https://en.wikipedia.org/wiki/European%20Conference%20on%20Wireless%20Sensor%20Networks
The European Conference on Wireless Sensor Networks (EWSN) is an annual academic conference on wireless sensor networks. Although there is no official ranking of academic conferences on wireless sensor networks, EWSN is widely regarded as the top European event in sensor networks.

EWSN Events
EWSN started in 2004:
EWSN 2015, Porto, Portugal, February 9–11, 2015
EWSN 2014, Oxford, UK, February 17–19, 2014
EWSN 2013, Ghent, Belgium, February 13–15, 2013
EWSN 2012, Trento, Italy, February 15–17, 2012
EWSN 2011, Bonn, Germany, February 23–25, 2011
EWSN 2010, Coimbra, Portugal, February 17–19, 2010
EWSN 2009, Cork, Ireland, February 11–13, 2009
EWSN 2008, Bologna, Italy, January 30 – February 1, 2008
EWSN 2007, Delft, The Netherlands, January 29–31, 2007
EWSN 2006, Zurich, Switzerland, February 13–15, 2006
EWSN 2005, Istanbul, Turkey, January 31 – February 2, 2005
EWSN 2004, Berlin, Germany, January 19–21, 2004

History
EWSN started in 2004, and the prime motivation behind it was to provide European researchers working in sensor networks with a venue to disseminate their research results. However, over the years EWSN has grown into a truly international event, with participants and authors coming from all over the world. In 2006 it was decided to silently upgrade the event from a workshop to a conference. With this change in effect, the acronym (i.e. EWSN) remains the same. Therefore, when giving a reference to EWSN 2004 to 2006, use European Workshop on Wireless Sensor Networks, and when giving a reference to EWSN 2007 onwards, use European Conference on Wireless Sensor Networks.

See also
Wireless sensor network

External links
EWSN Bibliography (from DBLP)
https://en.wikipedia.org/wiki/144%2C000
144,000 is a natural number. It has significance in various religious movements and ancient prophetic belief systems.

Religion

Christianity

Book of Revelation
The number 144,000 appears three times in the Book of Revelation:
Revelation 7:3–8
Revelation 14:1
Revelation 14:3–5

The numbers 12,000 and 144,000 are variously interpreted in traditional Christianity. Some, taking the numbers in Revelation to be symbolic, believe it represents all of God's people throughout history in the heavenly Church. One suggestion is that the number comes from 12, a symbol for totality, which is squared and multiplied by one thousand for more emphasis. Others insist the numbers 12,000 and 144,000 are literal numbers and representing either descendants of Jacob (also called Israel in the Bible) or others to whom God has given a superior destiny with a distinct role at the time of the end of the world. One understanding is that the 144,000 are recently converted Jewish evangelists sent out to bring sinners to Jesus Christ during the seven-year tribulation period. Preterists believe they are Jewish Christians, sealed for deliverance from the destruction of Jerusalem in 70 A.D. Dispensationalist Tim LaHaye, in his commentary Revelation: Illustrated and Made Plain (Zondervan, 1975), considers the 144,000 in Revelation 7 to refer to Jews and those in Revelation 14 to refer to Christians.

Jehovah's Witnesses
Jehovah's Witnesses believe that exactly 144,000 faithful Christians from Pentecost of 33 AD until the present day will be resurrected to heaven as immortal spirit beings to spend eternity with God and Christ. They believe that these people are "anointed" by God to become part of the spiritual "Israel of God". They believe the 144,000 (which they consider to be synonymous with the "little flock" of Luke 12:32) will serve with Christ as king-priests for a thousand years, while all other people accepted by God (the "other sheep" of John 10:16, composed of "the great crowd" o
https://en.wikipedia.org/wiki/Conference%20on%20Embedded%20Networked%20Sensor%20Systems
SenSys, the ACM Conference on Embedded Networked Sensor Systems, is an annual academic conference in the area of embedded networked sensors.

About SenSys
ACM SenSys is a selective, single-track forum for the presentation of research results on systems issues in the area of embedded networked sensors. The conference provides a venue to address the research challenges facing the design, deployment, use, and fundamental limits of these systems. Sensor networks require contributions from many fields, including wireless communication and networking, embedded systems and hardware, distributed systems, data management, and applications. SenSys welcomes cross-disciplinary work.

Ranking
Although there is no official ranking of academic conferences on wireless sensor networks, SenSys is widely regarded by researchers as one of the two (along with IPSN) most prestigious conferences focusing on sensor network research. SenSys focuses more on systems issues, while IPSN focuses on algorithmic and theoretical considerations. The acceptance rate for 2017 was 17.2% (26 out of 151 papers accepted for publication).

SenSys Events
SenSys started in 2003; the following is a list of SenSys events from 2003 to 2017:
SenSys 2017, Delft, The Netherlands, November 5–8, 2017
SenSys 2016, Stanford, CA, USA, November 14–16, 2016
SenSys 2015, Seoul, South Korea, November 1–4, 2015
SenSys 2014, Memphis, Tennessee, USA, November 3–6, 2014
SenSys 2013, Rome, Italy, November 11–14, 2013
SenSys 2012, Toronto, Canada, November 6–9, 2012
SenSys 2011, Seattle, WA, USA, November 1–4, 2011
SenSys 2010, Zurich, Switzerland, November 3–5, 2010
SenSys 2009, Berkeley, California, USA, November 4–6, 2009
SenSys 2008, Raleigh, North Carolina, USA, November 5–7, 2008
SenSys 2007, Sydney, Australia, November 6–9, 2007
SenSys 2006, Boulder, Colorado, USA, November 1–3, 2006
SenSys 2005, San Diego, CA, USA, November 2–4, 2005
SenSys 2004, Baltimore, MD, USA, November 3–5, 2004
SenSys 2003, Los Angeles, Calif
https://en.wikipedia.org/wiki/List%20of%20wastewater%20treatment%20technologies
This page consists of a list of wastewater treatment technologies:

See also
Agricultural wastewater treatment
Industrial wastewater treatment
List of solid waste treatment technologies
Waste treatment technologies
Water purification
Sewage sludge treatment
https://en.wikipedia.org/wiki/Hacker%20II%3A%20The%20Doomsday%20Papers
Hacker II: The Doomsday Papers is a computer game written by Steve Cartwright and published by Activision in 1986. It is the sequel to the 1985 game Hacker. Hacker II was released for the Amiga, Apple II, Apple IIGS, Amstrad CPC, Atari ST, Commodore 64, IBM PC, Macintosh, and ZX Spectrum.

Plot
Hacker II is more difficult and involved than the first game. In Hacker II, the player is recruited based upon their (assumed) success with the activities in the original game. Once again, they are tasked with controlling a robot, this time to infiltrate a secure facility in order to retrieve documents known only as "The Doomsday Papers" from a well-guarded vault, to ensure the security of the United States. Eventually, as they escape with the papers, the player is confronted by agents of the United States, who reveal that the player has actually been working for a former Magma employee, who wanted the papers in revenge for what had happened to the company the player had presumably exposed in the first game. The building that the player had unwittingly broken into was a government facility. The player then has to go back into the facility as part of a gambit to expose the Magma agent, avoiding the same security that had threatened the player before.

Gameplay
Gameplay is considerably changed from the previous game, and the packaging notably includes a "manual" describing the function of a four-way monitor system provided to the player. It is hooked into the camera security network of the facility the player is asked to infiltrate. A handful of robots are available, hidden in the facility, in case some are lost. By using the camera system and an in-game map that helps track guard patrols and the location of the robots, the player must explore the one-floor facility and find the codes needed to open the vault and escape with the papers. To aid the player, there is also a pre-recorded security tape of a typical day for every camera in the facility, which the player can bypass t
https://en.wikipedia.org/wiki/Aperture%20%28computer%20memory%29
In computing, an aperture is a portion of physical address space (i.e. physical memory) that is associated with a particular peripheral device or a memory unit. Apertures may reach external devices such as ROM or RAM chips, or internal memory on the CPU itself. Typically, a memory device attached to a computer accepts addresses starting at zero, and so a system with more than one such device would have ambiguous addressing. To resolve this, the memory logic will contain several aperture selectors, each containing a range selector and an interface to one of the memory devices. The selector address ranges of the apertures are disjoint. When the CPU presents a physical address within the range recognized by an aperture, the aperture unit routes the request (with the address remapped to a zero base) to the attached device. Thus, apertures form a layer of address translation below the level of the usual virtual-to-physical mapping. See also Address bus AGP aperture Memory-mapped I/O External links Flash Memory Solutions Computer memory Computer architecture
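To make the decoding concrete, here is a minimal sketch in Python (not from the article) of a memory logic with disjoint aperture selectors; the class names, device contents, and address ranges are invented for illustration.

```python
# Illustrative model of aperture decoding; all names are hypothetical.
class Aperture:
    """One selector: claims [base, base + size) and remaps to a zero base."""
    def __init__(self, base, size, device):
        self.base, self.size, self.device = base, size, device

    def claims(self, addr):
        return self.base <= addr < self.base + self.size


class MemoryLogic:
    """Holds aperture selectors whose address ranges are disjoint."""
    def __init__(self, apertures):
        self.apertures = apertures

    def read(self, addr):
        for ap in self.apertures:
            if ap.claims(addr):
                # Remap to a zero base before handing the request to the
                # attached device, which accepts addresses starting at zero.
                return ap.device[addr - ap.base]
        raise ValueError(f"no aperture claims physical address {addr:#06x}")


ram = [0x00] * 0x4000                                # hypothetical 16 KiB RAM
rom = [0xEA] * 0x1000                                # hypothetical 4 KiB ROM
bus = MemoryLogic([Aperture(0x0000, 0x4000, ram),    # RAM at 0x0000-0x3FFF
                   Aperture(0xF000, 0x1000, rom)])   # ROM at 0xF000-0xFFFF
assert bus.read(0xF123) == rom[0x123]                # same cell, remapped
```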
https://en.wikipedia.org/wiki/GNU%20Bazaar
GNU Bazaar (formerly Bazaar-NG, command line tool bzr) is a distributed and client–server revision control system sponsored by Canonical. Bazaar can be used by a single developer working on multiple branches of local content, or by teams collaborating across a network. Bazaar is written in the Python programming language, with packages for major Linux distributions, and Microsoft Windows. Bazaar is free software and part of the GNU Project. Features Bazaar commands are similar to those found in CVS or Subversion. A new project can be started and maintained without a remote repository server by invoking bzr init in a directory which a person wishes to version. In contrast to purely distributed version control systems which do not use a central server, Bazaar supports working with or without a central server. It is possible to use both methods at the same time with the same project. The websites Launchpad and SourceForge provide free hosting service for projects managed with Bazaar. Bazaar has support for working with some other revision control systems. This allows users to branch from another system (such as Subversion), make local changes and commit them into a Bazaar branch, and then later merge them back into the other system. Read-only access is also available for Git and Mercurial. Bazaar also allows for interoperation with many other systems (including CVS, Darcs, Git, Perforce, Mercurial) by allowing one to import/export the history. Bazaar supports files with names from the complete Unicode set. It also allows commit messages, committer names, etc. to be in Unicode. History Baz: an earlier Canonical version control system The name "Bazaar" was originally used by a fork of the GNU arch client tla. This fork is called Baz to distinguish it from the current Bazaar software. Baz was announced in October 2004 by Canonical employee Robert Collins and maintained until 2005, when the project then called Bazaar-NG (the present Bazaar) was announced a
https://en.wikipedia.org/wiki/Physical-to-Virtual
In computing, Physical-to-Virtual ("P2V" or "p-to-v") involves the process of decoupling and migrating a physical server's operating system (OS), applications, and data from that physical server to a virtual-machine guest hosted on a virtualized platform. Methods of P2V migration Manual P2V The user manually creates a virtual machine in a virtual host environment and copies all the files from the OS, applications, and data from the source machine. Semi-automated P2V Performing a P2V migration using a tool that assists the user in moving a server from a physical state to a virtual machine. Microsoft's Virtual Server 2005 Migration Toolkit, HOWTO: Guideline for use Virtual Server Migration ToolKit (KB555306) VMware provides a semi-automated tool called VMware vCenter Converter for moving physical servers running Windows or Linux into virtual environments while they are powered on. VMware vCenter Converter replaces two older utilities: Importer (bundled with VMware Workstation) and P2V Assistant. Oracle's Virtual Box has a Linux-based tool which allows the conversion of a dd image of an existing hard drive. Microsoft provides the SysInternals disk2vhd utility for making images from Windows XP or later systems to be used with Windows Virtual PC, Microsoft Virtual Server or Hyper-V. openQRM, an open-source datacenter management platform, does P2V (and V2P, V2V or P2P). Storix's bare-metal recovery product (System Backup Administrator) provides P2V and V2P capabilities for Linux, AIX, and Solaris. Fully automated P2V Performing a P2V migration using a tool that migrates the server over the network without any assistance from the user. Veritas Backup Exec has a Physical to Virtual (P2V) conversion (and V2P) feature built into the backup engine which can be used for migrations or instant recovery vContinuum vContinuum by InMage systems is an automated P2V data protection/migration tool Symantec System Recovery enables fast, automated P2V and V2P conversions Quest vConverte
https://en.wikipedia.org/wiki/Position%20tolerance
Position Tolerance (symbol: ⌖) is a geometric dimensioning and tolerancing (GD&T) location control used on engineering drawings to specify a feature's desired location on a part, as well as the allowed deviation from that position. Position tolerance must only be applied to features of size, which requires that the feature have at least two opposable points. See also Circle Miscellaneous Technical Technical drawing
https://en.wikipedia.org/wiki/POWER2
The POWER2, originally named RIOS2, is a processor designed by IBM that implemented the POWER instruction set architecture. The POWER2 was the successor of the POWER1, debuting in September 1993 within IBM's RS/6000 systems. When introduced, the POWER2 was the fastest microprocessor, surpassing the Alpha 21064. When the Alpha 21064A was introduced in 1993, the POWER2 lost the lead and became second. IBM claimed that the performance for a 62.5 MHz POWER2 was 73.3 SPECint92 and 134.6 SPECfp92. The open source GCC compiler removed support for POWER1 (RIOS) and POWER2 (RIOS2) in the 4.5 release. Description Improvements over the POWER1 included enhancements to the POWER instruction set architecture (consisting of new user and system instructions and other system-related features), higher clock rates (55 to 71.5 MHz), an extra fixed point unit and floating point unit, a larger 32 KB instruction cache, and a larger 128 or 256 KB data cache. The POWER2 was a multi-chip design consisting of six or eight semi-custom integrated circuits, depending on the amount of data cache (the 256 KB configuration required eight chips). The partitioning of the design was identical to that of the POWER1: an instruction cache unit chip, a fixed-point unit chip, a floating-point unit chip, a storage control unit chip, and two or four data cache unit chips. The eight-chip configuration contains a total of 23 million transistors and a total die area of 1,215 mm2. The chips are manufactured by IBM in its 0.72 μm CMOS process, which features a 0.45 μm effective channel length; and one layer of polysilicon and four layers of metal interconnect. The chips are packaged in a ceramic multi-chip module (MCM) that measures 64 mm by 64 mm. POWER2+ An improved version of the POWER2 optimized for transaction processing was introduced in May 1994 as the POWER2+. Transaction processing workloads benefited from the addition of a L2 cache with capacities of 512 KB, 1 MB and 2 MB. This cache was implement
https://en.wikipedia.org/wiki/Singular%20control
In optimal control, problems of singular control are problems that are difficult to solve because a straightforward application of Pontryagin's minimum principle fails to yield a complete solution. Only a few such problems have been solved, such as Merton's portfolio problem in financial economics or trajectory optimization in aeronautics. A more technical explanation follows. The most common difficulty in applying Pontryagin's principle arises when the Hamiltonian depends linearly on the control u, i.e., is of the form H = φ(x, λ, t)u + (terms independent of u), and the control is restricted to being between an upper and a lower bound: a ≤ u(t) ≤ b. To minimize H, we need to make u as big or as small as possible, depending on the sign of the switching function φ(x, λ, t): specifically, u(t) = b when φ < 0 and u(t) = a when φ > 0. If φ is positive at some times, negative at others, and is only zero instantaneously, then the solution is straightforward and is a bang-bang control that switches from b to a at times when φ switches from negative to positive. The case when φ remains at zero for a finite length of time t1 ≤ t ≤ t2 is called the singular control case. Between t1 and t2 the minimization of the Hamiltonian with respect to u gives us no useful information and the solution in that time interval is going to have to be found from other considerations. (One approach would be to repeatedly differentiate ∂H/∂u with respect to time until the control u again explicitly appears, though this is not guaranteed to happen eventually. One can then set that expression to zero and solve for u. This amounts to saying that between t1 and t2 the control is determined by the requirement that the singularity condition continues to hold.) The resulting so-called singular arc, if it is optimal, will satisfy the Kelley condition given below. Others refer to this condition as the generalized Legendre–Clebsch condition. The term bang-singular control refers to a control that has a bang-bang portion as well as a singular portion.
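The formulas elided above can be restated compactly. The following LaTeX sketch gives the standard textbook formulation; the symbols φ (switching function), a and b (control bounds), and t1, t2 are conventional choices made here, not quotations from the original article.

```latex
% Hamiltonian affine in the control u, with bounded control:
\[
  H(x,\lambda,u,t) = \varphi(x,\lambda,t)\,u + (\text{terms independent of } u),
  \qquad a \le u(t) \le b.
\]
% Pointwise minimization of H gives the bang-bang law:
\[
  u^*(t) = \begin{cases} b, & \varphi(x,\lambda,t) < 0, \\
                         a, & \varphi(x,\lambda,t) > 0. \end{cases}
\]
% On a singular arc (\varphi \equiv 0 for t_1 \le t \le t_2), optimality
% requires the Kelley (generalized Legendre--Clebsch) condition:
\[
  (-1)^k \,\frac{\partial}{\partial u}
  \left[ \frac{d^{2k}}{dt^{2k}} \frac{\partial H}{\partial u} \right] \ge 0,
  \qquad k = 0, 1, 2, \ldots
\]
```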
https://en.wikipedia.org/wiki/Open-source%20video%20game
An open-source video game, or simply an open-source game, is a video game whose source code is open-source. They are often freely distributable and sometimes cross-platform compatible. Definition and differentiation Not all open-source games are free software; some open-source games contain proprietary non-free content. Open-source games that are free software and contain exclusively free content conform to DFSG, free culture, and open content and are sometimes called free games. Many Linux distributions require, for inclusion, that the game content be freely redistributable; freeware or commercial-restriction clauses are prohibited. Background In general, open-source games are developed by relatively small groups of people in their free time, with profit not being the main focus. Many open-source games are volunteer-run projects, and as such, developers of free games are often hobbyists and enthusiasts. The consequence of this is that open-source games often take longer to mature, are less common, and often lack the production value of commercial titles. In the 1990s, a challenge to building high-quality content for games was the limited availability, or excessive price, of tools such as 3D modellers or toolsets for level design. In recent years this has changed, and the availability of open-source tools such as Blender, along with game engines and libraries, has driven open-source and independent video gaming. FLOSS game engines, like the Godot game engine, as well as libraries, like SDL, are increasingly common in game development, even of proprietary titles. Given that game art is not considered software, there is debate about the philosophical or ethical obstacles in selling a game where its art is proprietary but the entire source code is free software. Some of the open-source game projects are based on formerly proprietary games, whose source code was released as open-source software, while the game content (such as graphics, audio and levels) may or may not be under a free license. Examples
https://en.wikipedia.org/wiki/The%20Heroic%20Age%20of%20American%20Invention
The Heroic Age of American Invention is a science book for children by L. Sprague de Camp, published by Doubleday in 1961. It was reprinted in 1993 by Barnes & Noble under the alternate title The Heroes of American Invention. The book has been translated into Portuguese. Summary By "heroic age" the author means the era of American history in which individual initiative and enterprise constituted the primary thread in technical innovation, roughly from the early 19th century until mass production and corporate enterprise outpaced that of the individual around the time of World War I. The story of innovation is told through the biographies and inventions of thirty-two key inventors of the United States' industrial revolution, whom de Camp feels were pivotal in converting the country from an agrarian nation to an industrial one. Some of the inventors spotlighted include Robert L. Stevens, George Westinghouse, Joseph Henry, Samuel Morse, Samuel Colt, Hiram Stevens Maxim, Hudson Maxim, Cyrus McCormick, John Ericsson, William Kelly, Ottmar Mergenthaler, Christopher Latham Sholes, Alexander Graham Bell, Thomas Edison, Elihu Thomson, Nikola Tesla, George Baldwin Selden, Samuel Pierpont Langley, Wilbur Wright, Orville Wright, Reginald Aubrey Fessenden, Lee de Forest, and Edwin Howard Armstrong. Contents I. Invention Comes to America II. The Heroic Age Begins III. The Stevenses and Railroading IV. Henry, Morse, and the Telegraph V. Colt and Other Gunmakers VI. McCormick and Farm Machinery VII. Ericsson and the Modern Warship VIII. Kelly and Steel Refining IX. Mergenthaler, Sholes, and Writing Machines X. Bell and the Telephone XI. Edison and the Electric Light XII. Thomson and Alternating-Current Power XIII. Selden and the Automobile XIV. Langley, The Wrights, and Flying XV. Fessenden, De Forest, and Radio XVI. The End of the Heroic Age Notes Bibliography Index References 1961 children's books Children's non-fiction books Technology books Books by L. Sprague de Camp Doubl
https://en.wikipedia.org/wiki/Spiegelman%27s%20Monster
Spiegelman's Monster is an RNA chain of only 218 nucleotides that is able to be reproduced by the RNA replication enzyme RNA-dependent RNA polymerase, also called RNA replicase. It is named after its creator, Sol Spiegelman, of the University of Illinois at Urbana-Champaign who first described it in 1965. Description Spiegelman introduced RNA from a simple bacteriophage Qβ (Qβ) into a solution which contained Qβ's RNA replicase, some free nucleotides, and some salts. In this environment, the RNA started to be replicated. After a while, Spiegelman took some RNA and moved it to another tube with fresh solution. This process was repeated. Shorter RNA chains were able to be replicated faster, so the RNA became shorter and shorter as selection favored speed. After 74 generations, the original strand with 4,500 nucleotide bases ended up as a dwarf genome with only 218 bases. This short RNA sequence replicated very quickly in these unnatural circumstances. Further work M. Sumper and R. Luce of Manfred Eigen's laboratory replicated the experiment, except without adding RNA, only RNA bases and Qβ replicase. They found that under the right conditions the Qβ replicase can spontaneously generate RNA which evolves into a form similar to Spiegelman's Monster. Eigen built on Spiegelman's work and produced a similar system further degraded to just 48 or 54 nucleotides—the minimum required for the binding of the replication enzyme, this time a combination of HIV-1 reverse transcriptase and T7 RNA polymerase. See also Abiogenesis RNA world hypothesis PAH world hypothesis Viroid References External links ASA - January 2000: almost life Not-so-Final Answers - The origin of life Origin of life RNA Molecular evolution
https://en.wikipedia.org/wiki/Musical%20clock
A musical clock is a clock that marks the hours of the day with a musical tune. They can be considered elaborate versions of striking or chiming clocks. Elaborate large-scale musical clocks with automatons are often installed in public places and are widespread in Japan. Unlike conventional electronic musical clocks, these clocks play pre-recorded music samples instead of using programmed sound synthesis. One of the earliest known domestic musical clocks was constructed by Nicholas Vallin in 1598, and it currently resides in the British Museum in London. Description The music on mechanical clocks is typically played from a spiked cylinder on bells, organ pipes, or bellows. On electric clocks such as quartz clocks, the music is usually generated using an electronic sound module. Most of these quartz musical clocks utilize either FM synthesis or sample-based synthesis technology for sound generation to produce high-fidelity and complex music, similar to the sound generation methods of electronic musical instruments. Pipe organ clock The pipe organ clock was a specific clock that chimed with a small pipe organ built into the unit. An example is a Markwick Markham made for the Turkish market, circa 1770. Popularity in Japan In Japan, aside from the extensive popularity of large-scale musical clocks installed in public facilities, electronic musical wall clocks have become popular novelty items since the late 1990s. They are mostly collected for their aesthetic and decorative values, especially those with elaborate movements and advanced music generation. See also Automaton clock Music by CPE Bach for musical clock References External links Clock designs Mechanical musical instruments
https://en.wikipedia.org/wiki/Automaton%20clock
An automaton clock or automata clock is a type of striking clock featuring automatons. Clocks like these were built from the 1st century BC through to Victorian times in Europe. A cuckoo clock is a simple form of this type of clock. The first known mention is of those created by the Roman engineer Vitruvius, describing early alarm clocks working with gongs or trumpets. Later automatons usually perform on the hour, half-hour or quarter-hour, usually to strike bells. Common figures in older clocks include Death (as a reference to human mortality), Old Father Time, saints and angels. In the Regency and Victorian eras, common figures also included royalty, famous composers or industrialists. More recently constructed automaton clocks are widespread in Japan, where they are known as karakuri-dokei. Notable examples of such clocks include the Ni-Tele Really Big Clock, designed by Hayao Miyazaki to be affixed on the Nippon Television headquarters in Tokyo, touted to be the largest animated clock in the world. In the United Kingdom, Kit Williams produced a series of large automaton clocks for a handful of British shopping centres, featuring frogs, ducks and fish. Seiko and Rhythm Clock are known for their battery-powered musical clocks, which frequently feature flashing lights, automatons and other moving parts designed to attract attention while in motion. References Mechanical engineering Clock designs Articles containing video clips Karakuri
https://en.wikipedia.org/wiki/International%20Conference%20on%20Information%20Processing%20in%20Sensor%20Networks
IPSN, the IEEE/ACM International Conference on Information Processing in Sensor Networks, is an academic conference on sensor networks with its main focus on information processing aspects of sensor networks. IPSN draws upon many disciplines including signal and image processing, information and coding theory, networking and protocols, distributed algorithms, wireless communications, machine learning, embedded systems design, and databases and information management. IPSN Events IPSN started in 2001, and the following is a list of IPSN events from 2001 to 2014: 13th IPSN 2014, Berlin, Germany, April 15–17, 2014 12th IPSN 2013, Philadelphia, PA, USA, April 8–11, 2013 11th IPSN 2012, Beijing, China, April 16–19, 2012 10th IPSN 2011, Chicago, IL, USA, April 12–14, 2011 9th IPSN 2010, Stockholm, Sweden, April 12–16, 2010 8th IPSN 2009, San Francisco, California, USA, April 13–16, 2009 7th IPSN 2008, (Washington U.) St. Louis, Missouri, USA, April 22–24, 2008 6th IPSN 2007, (MIT) Cambridge, MA, USA, April 25–27, 2007 5th IPSN 2006, (Vanderbilt) Nashville, Tennessee, USA, April 19–21, 2006 4th IPSN 2005, (UCLA) Los Angeles, CA, USA, April 25–27, 2005 3rd IPSN 2004, (UC Berkeley) Berkeley, CA, USA, April 26–27, 2004 2nd IPSN 2003, (Xerox PARC) Palo Alto, CA, USA, April 22–23, 2003 CSP Workshop 2001, (Xerox PARC) Palo Alto, CA (see history subsection for name explanation) Ranking Although there is no official ranking of academic conferences on wireless sensor networks, IPSN is widely regarded by researchers as one of the two (along with SenSys) most prestigious conferences focusing on sensor network research. SenSys focuses more on system issues, while IPSN focuses on algorithmic and theoretical considerations. The acceptance rate for 2006 was 15.2% for oral presentations, 25% overall (25 papers + 17 poster presentations accepted, out of 165 submissions). History IPSN started off as a workshop at Xerox Palo Alto Research Center in 2001, and it was initially called Colla
https://en.wikipedia.org/wiki/Kernel%20principal%20component%20analysis
In the field of multivariate statistics, kernel principal component analysis (kernel PCA) is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are performed in a reproducing kernel Hilbert space. Background: Linear PCA Recall that conventional PCA operates on zero-centered data; that is, (1/N) Σ_i x_i = 0, where x_i is one of the N multivariate observations. It operates by diagonalizing the covariance matrix C = (1/N) Σ_i x_i x_i^T; in other words, it gives an eigendecomposition of the covariance matrix: Cv = λv, which can be rewritten as λ x_i^T v = x_i^T C v for all i. (See also: Covariance matrix as a linear operator) Introduction of the Kernel to PCA To understand the utility of kernel PCA, particularly for clustering, observe that, while N points cannot, in general, be linearly separated in d < N dimensions, they can almost always be linearly separated in d ≥ N dimensions. That is, given N points x_i, if we map them to an N-dimensional space with Φ(x_i), where Φ: R^d → R^N, it is easy to construct a hyperplane that divides the points into arbitrary clusters. Of course, this Φ creates N linearly independent vectors, so there is no covariance on which to perform eigendecomposition explicitly as we would in linear PCA. Instead, in kernel PCA, a non-trivial, arbitrary function Φ is 'chosen' that is never calculated explicitly, allowing the possibility to use very-high-dimensional Φ's if we never have to actually evaluate the data in that space. Since we generally try to avoid working in the Φ-space, which we will call the 'feature space', we can create the N-by-N kernel K(x, y) = (Φ(x), Φ(y)) = Φ(x)^T Φ(y), which represents the inner product space (see Gramian matrix) of the otherwise intractable feature space. The dual form that arises in the creation of a kernel allows us to mathematically formulate a version of PCA in which we never actually solve the eigenvectors and eigenvalues of the covariance matrix in the Φ-space (see Kernel trick). The N-elements in each column of K represent the dot product of one point of the tr
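As a concrete illustration of the procedure described above, here is a minimal NumPy sketch assuming a Gaussian (RBF) kernel; the function and parameter names are chosen for illustration, and the centering of K mirrors the zero-centering step of linear PCA.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Project N samples (rows of X) onto the top kernel principal components."""
    N = X.shape[0]
    # N-by-N kernel matrix K[i, j] = k(x_i, x_j); the feature space Phi is
    # never constructed explicitly (the kernel trick).
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # Center K, the feature-space analogue of zero-centering the data.
    one_n = np.full((N, N), 1.0 / N)
    K = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecomposition of K stands in for diagonalizing the covariance
    # matrix in the (intractable) feature space.
    eigvals, eigvecs = np.linalg.eigh(K)            # ascending order
    top = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, top] / np.sqrt(eigvals[top])
    return K @ alphas                               # projected training points

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
print(kernel_pca(X, n_components=2, gamma=0.5).shape)  # (100, 2)
```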
https://en.wikipedia.org/wiki/Corvus%20Systems
Corvus Systems was a computer technology company that offered, at various points in its history, computer hardware, software, and complete PC systems. History Corvus was founded by Michael D'Addio and Mark Hahn in 1979. This San Jose, Silicon Valley company pioneered in the early days of personal computers, producing the first hard disk drives, data backup, and networking devices, commonly for the Apple II series. The combination of disk storage, backup, and networking was very popular in primary and secondary education. A classroom would have a single drive and backup with a full classroom of Apple II computers networked together. Students would log in each time they used the computer and access their work via the Corvus Omninet network, which also supported eMail. They went public in 1981 and were traded on the NASDAQ exchange. In 1985 Corvus acquired a company named Onyx & IMI. IMI (International Memories Incorporated) manufactured the hard disks used by Corvus. The New York Times followed their financial fortunes. They were a modest success in the stock market during their first few years as a public company. The company's founders left Corvus in 1985 as the remaining board of directors made the decision to enter the PC clone market. D'Addio and Hahn went on to found Videonics in 1986, the same year Corvus discontinued hardware manufacturing. In 1987, Corvus filed for Chapter 11. That same year two top executives left. Its demise was partially caused by Ethernet establishing itself over Omninet as the local area network standard for PCs, and partially by the decision to become a PC clone company in a crowded and unprofitable market space. Disk drives and backup The company modified the Apple II's DOS operating system to enable using Corvus's 10 MB Winchester technology hard disk drives. Apple DOS was normally limited to the use of 140 KB floppy disks. The Corvus disks not only increased the size of available storage but were also considerably faster than f
https://en.wikipedia.org/wiki/E.B.%20Wilson%20Medal
The E.B. Wilson Medal is the American Society for Cell Biology's highest honor for science and is presented at the Annual Meeting of the Society for significant and far-reaching contributions to cell biology over the course of a career. It is named after Edmund Beecher Wilson. Medalists Source: ASCB See also List of medicine awards References American Society for Cell Biology Medicine awards Biology awards American awards Awards established in 1981
https://en.wikipedia.org/wiki/Keith%20R.%20Porter%20Lecture
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year. Lecturers Source: ASCB See also List of biology awards References American Society for Cell Biology Biology education American awards Recurring events established in 1982 1982 establishments in the United States Science lecture series Biological events
https://en.wikipedia.org/wiki/WICB%20Junior%20and%20Senior%20Awards
The Women In Cell Biology Committee of the American Society for Cell Biology (ASCB) recognizes outstanding achievements by women in cell biology by presenting three (previously only two) Career Recognition Awards at the ASCB Annual Meeting. The Junior Award is given to a woman in an early stage of her career (generally seven or eight years in an independent position) who has made exceptional scientific contributions to cell biology and exhibits the potential for continuing a high level of scientific endeavor while fostering the career development of young scientists. The Mid-Career Award (introduced in 2012) is given to a woman at the mid-career level who has made exceptional scientific contributions to cell biology and/or has effectively translated cell biology across disciplines, and who exemplifies a high level of scientific endeavor and leadership. The Senior Award is given to a woman or man in a later career stage (generally full professor or equivalent) whose outstanding scientific achievements are coupled with a long-standing record of support for women in science and by mentorship of both men and women in scientific careers. Senior awardees Source: WICB 2020 Erika Holzbaur 2019 Rong Li 2018 Eva Nogales 2017 Harvey Lodish 2016 Susan Gerbi 2015 Angelika Amon 2014 Sandra L. Schmid 2013 Lucille Shapiro 2012 Marianne Bronner 2011 Susan Rae Wente 2010 Zena Werb 2009 Janet Rossant 2008 Fiona Watt 2007 Frances Brodsky 2006 Joseph Gall 2005 Elizabeth Blackburn 2004 Susan Lindquist 2003 Philip Stahl 2002 Natasha Raikhel 2001 Joan Brugge 2000 Shirley Tilghman 1999 Ursula Goodenough 1998 Christine Guthrie 1997 Elaine Fuchs 1996 Sarah C. R. Elgin 1995 Virginia Zakian 1994 Ann Hubbard 1993 Mina Bissell 1992 Helen Blau 1991 Hynda Kleinman 1990 Dorthea Wilson and Rosemary Simpson 1989 Dorothy Bainton 1988 No Awardees selected 1987 Dorothy M. Skinner 1986 Mary Clutter Mid-Career awardees Source: WICB 2020 Daniela Nicastro and Anne E. Carpenter 2019 Coleen T. Murp
https://en.wikipedia.org/wiki/Early%20Career%20Life%20Scientist%20Award
The ASCB Early Career Life Scientist Award is awarded by the American Society for Cell Biology to an outstanding scientist who earned a doctorate no more than 12 years earlier and who has served as an independent investigator for no more than seven years. The winner speaks at the ASCB Annual Meeting and receives a monetary prize. Awardees Source: American Society for Cell Biology 2020 James Olzmann 2019 Cigall Kadoch 2018 Sergiu Pasca 2017 Meng Wang 2016 Bo Huang; Valentina Greco 2015 Vladimir Denic 2014 Manuel Thery 2013 Douglas B. Weibel 2012 Iain Cheeseman 2012 Gia Voeltz 2011 Maxence V. Nachury 2010 Anna Kashina 2009 Martin W. Hetzer 2008 Arshad B. Desai 2007 Abby Dernburg 2006 Karsten Weis 2005 Eva Nogales 2004 No award this year 2003 Frank Gertler 2002 Kathleen Collins and Benjamin Cravatt 2001 Daphne Preuss 2000 Erin O'Shea 1999 Raymond Deshaies See also List of biology awards References American Society for Cell Biology Biology awards Early career awards American science and technology awards Awards established in 1999 1999 establishments in the United States
https://en.wikipedia.org/wiki/De%20Bruijn%20torus
In combinatorial mathematics, a De Bruijn torus, named after Dutch mathematician Nicolaas Govert de Bruijn, is an array of symbols from an alphabet (often just 0 and 1) that contains every possible matrix of given dimensions exactly once. It is a torus because the edges are considered wraparound for the purpose of finding matrices. Its name comes from the De Bruijn sequence, which can be considered a special case where n = 1 (one dimension). One of the main open questions regarding De Bruijn tori is whether a De Bruijn torus for a particular alphabet size can be constructed for a given m and n. It is known that these always exist when n = 1, since then we simply get the De Bruijn sequences, which always exist. It is also known that "square" tori exist whenever m = n and n is even (for the odd case the resulting tori cannot be square). The smallest possible binary "square" de Bruijn torus, depicted above right, denoted as the (4,4;2,2)_2 de Bruijn torus (or simply as B_2), contains all 2 × 2 binary matrices. B2 Apart from "translation", "inversion" (exchanging 0s and 1s) and "rotation" (by 90 degrees), no other de Bruijn tori are possible – this can be shown by complete inspection of all 2^16 binary matrices (or a subset fulfilling constraints such as equal numbers of 0s and 1s). The torus can be unrolled by repeating n−1 rows and columns. All n×n submatrices without wraparound, such as the one shaded yellow, then form the complete set: {| class="wikitable" | 1 || style="background:#ccc;"|0 || 1 || 1 || rowspan="5" style="border-left:solid; padding:0;"| || 1 |- | 1 || style="background:#ccc;"|0 || style="background:#ccc;"|0 || style="background:#ccc;"|0 || 1 |- | style="background:#ccc;"|0 || style="background:#ccc;"|0 || style="background:#ccc;"|0 || 1 || style="background:#ccc;"|0 |- | 1 || 1 || style="background:#ccc;"|0 || style="background:#ff0;"|1 || style="background:#ff0;"|1 |- style="border-top:solid;" | 1 || style="background:#ccc;"|0 || 1 || style="background:#ff0;"|1 || style="background:#ff0;
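The defining property is easy to check by machine. The sketch below transcribes the first four rows and columns of the unrolled table above and verifies, with wraparound, that every 2 × 2 binary matrix occurs exactly once.

```python
# The (4,4;2,2) binary de Bruijn torus, transcribed from the table above.
T = [[1, 0, 1, 1],
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [1, 1, 0, 1]]

n = 4
seen = set()
for i in range(n):                 # 16 positions; edges wrap around
    for j in range(n):
        block = (T[i][j], T[i][(j + 1) % n],
                 T[(i + 1) % n][j], T[(i + 1) % n][(j + 1) % n])
        seen.add(block)

# There are 2**(2*2) = 16 possible 2x2 binary matrices and 16 positions,
# so distinctness implies each occurs exactly once.
assert len(seen) == 16
print("all 16 binary 2x2 matrices occur exactly once")
```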
https://en.wikipedia.org/wiki/Ravi%20Vakil
Ravi D. Vakil (born February 22, 1970) is a Canadian-American mathematician working in algebraic geometry. Education and career Vakil attended high school at Martingrove Collegiate Institute in Etobicoke, Ontario, where he won several mathematical contests and olympiads. After earning a BSc and MSc from the University of Toronto in 1992, he completed a PhD in mathematics at Harvard University in 1997 under Joe Harris. He has since been an instructor at both Princeton University and MIT. Since the fall of 2001, he has taught at Stanford University, becoming a full professor in 2007. Contributions Vakil is an algebraic geometer and his research work spans enumerative geometry, topology, Gromov–Witten theory, and classical algebraic geometry. He has solved several old problems in Schubert calculus. Among other results, he proved that all Schubert problems are enumerative over the real numbers, a result that resolves an issue mathematicians have worked on for at least two decades. Awards and honors Vakil has received many awards, including an NSF CAREER Fellowship, a Sloan Research Fellowship, an American Mathematical Society Centennial Fellowship, a G. de B. Robinson prize for the best paper published (2000) in the Canadian Journal of Mathematics and the Canadian Mathematical Bulletin, the André-Aisenstadt Prize from the Centre de Recherches Mathématiques at the Université de Montréal (2005), and the Chauvenet Prize (2014). In 2012 he became a fellow of the American Mathematical Society. Mathematics contests He was a member of the Canadian team in three International Mathematical Olympiads, winning silver, gold (perfect score), and gold in 1986, 1987, and 1988 respectively. He was also the fourth person in the history of the contest to be a four-time Putnam Fellow. He has also coordinated weekly Putnam preparation seminars at Stanford. References External links Ravi Vakil's Home Page The Rising Sea | Ravi's notes on algebraic ge
https://en.wikipedia.org/wiki/Star%20domain
In geometry, a set S in the Euclidean space R^n is called a star domain (or star-convex set, star-shaped set or radially convex set) if there exists an x_0 in S such that for all x in S the line segment from x_0 to x lies in S. This definition is immediately generalizable to any real, or complex, vector space. Intuitively, if one thinks of S as a region surrounded by a wall, S is a star domain if one can find a vantage point x_0 in S from which any point x in S is within line-of-sight. A similar, but distinct, concept is that of a radial set. Definition Given two points x and y in a vector space X (such as Euclidean space R^n), the convex hull of {x, y} is called the closed interval with endpoints x and y and it is denoted by [x, y] = {tx + (1 − t)y : 0 ≤ t ≤ 1}, where z[0, 1] = {tz : 0 ≤ t ≤ 1} for every vector z. A subset S of a vector space X is said to be star-shaped at x_0 in S if for every x in S the closed interval [x_0, x] is a subset of S. A set S is star-shaped and is called a star domain if there exists some point x_0 in S such that S is star-shaped at x_0. A set that is star-shaped at the origin is sometimes called a star set. Such sets are closely related to Minkowski functionals. Examples Any line or plane in R^n is a star domain. A line or a plane with a single point removed is not a star domain. If A is a set in R^n, the set obtained by connecting all points in A to the origin is a star domain. Any non-empty convex set is a star domain. A set is convex if and only if it is a star domain with respect to any point in that set. A cross-shaped figure is a star domain but is not convex. A star-shaped polygon is a star domain whose boundary is a sequence of connected line segments. Properties The closure of a star domain is a star domain, but the interior of a star domain is not necessarily a star domain. Every star domain is a contractible set, via a straight-line homotopy. In particular, any star domain is a simply connected set. Every star domain, and only a star domain, can be "shrunken into itself"; that is, for every dilation ratio r < 1, the star domain can be dilated by a ratio r such that the dilated star domain is contained in the original star domain. The union and intersection of
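For reference, the reconstructed definition can be stated in symbols; a LaTeX sketch follows, with S and x_0 as above.

```latex
% Closed interval (segment) with endpoints x and y:
\[
  [x, y] = \{\, t x + (1 - t) y \;:\; 0 \le t \le 1 \,\}.
\]
% S is a star domain iff some point sees all of S along segments inside S:
\[
  S \text{ is a star domain} \iff
  \exists\, x_0 \in S \;\; \forall x \in S :\; [x_0, x] \subseteq S.
\]
```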
https://en.wikipedia.org/wiki/The%20Ancient%20Engineers
The Ancient Engineers is a 1963 science book by L. Sprague de Camp, one of his most popular works. It was first published by Doubleday and has been reprinted numerous times by other publishers. Translations into German and Polish have also been published. Portions of the work had previously appeared as articles in the magazines Fate, Isis and Science Digest. Contents The work is an examination of engineering through the ages from 3000 BC to 1519 AD, from the monumental works of the Egyptians through the speculative inventions of Leonardo da Vinci. The technological legacies of Mesopotamia, Egypt, Greece, Rome, China, the medieval Arabs and Europeans, and Renaissance Europe, are all covered in separate sections, focusing particularly on architectural, military and civil engineering. Review The following review is often quoted in reference to this book: Mr. de Camp has the trick of being able to show technology engaging in feats as full of derring-do as those of Hannibal's army. History as it should be told. —Isaac Asimov, The New York Times Book Review, 15 May 1963 See also Lest Darkness Fall Notes External links 1963 non-fiction books Books by L. Sprague de Camp Doubleday (publisher) books Science books
https://en.wikipedia.org/wiki/Cohn%27s%20irreducibility%20criterion
Arthur Cohn's irreducibility criterion is a sufficient condition for a polynomial to be irreducible in Z[x]—that is, for it to be unfactorable into the product of lower-degree polynomials with integer coefficients. The criterion is often stated as follows: If a prime number p is expressed in base 10 as p = a_m 10^m + a_{m−1} 10^{m−1} + ⋯ + a_1 10 + a_0 (where 0 ≤ a_i ≤ 9) then the polynomial f(x) = a_m x^m + a_{m−1} x^{m−1} + ⋯ + a_1 x + a_0 is irreducible in Z[x]. The theorem can be generalized to other bases as follows: Assume that b ≥ 2 is a natural number and p(x) = a_k x^k + a_{k−1} x^{k−1} + ⋯ + a_1 x + a_0 is a polynomial such that 0 ≤ a_i ≤ b − 1. If p(b) is a prime number then p(x) is irreducible in Z[x]. The base 10 version of the theorem is attributed to Cohn by Pólya and Szegő in one of their books while the generalization to any base b is due to Brillhart, Filaseta, and Odlyzko. In 2002, Ram Murty gave a simplified proof as well as some history of the theorem in a paper that is available online. A further generalization of the theorem allowing coefficients larger than digits was given by Filaseta and Gross. In particular, let f(x) be a polynomial with non-negative integer coefficients such that f(10) is prime. If all coefficients are at most 49598666989151226098104244512918, then f(x) is irreducible over the rational numbers. Moreover, they proved that this bound is also sharp. In other words, coefficients larger than 49598666989151226098104244512918 do not guarantee irreducibility. The method of Filaseta and Gross was also generalized to provide similar sharp bounds for some other bases by Cole, Dunn, and Filaseta. The converse of this criterion is that, if p is an irreducible polynomial with integer coefficients that have greatest common divisor 1, then there exists a base b such that the coefficients of p form the representation of a prime number in that base; this is the Bunyakovsky conjecture and its truth or falsity remains an open question. Historical notes Pólya and Szegő gave their own generalization but it has many side conditions (on the locations of the roots, for instance) so it lacks the elegance of Brillhart's, Filaseta's, and Odlyzko's generalization. It is clear from c
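A quick computational check of the base-10 statement, using SymPy; the prime 1033 is an assumed example rather than one taken from the article.

```python
from sympy import Poly, isprime
from sympy.abc import x

def cohn_polynomial(p):
    """Polynomial whose coefficients are the base-10 digits of p (a_m ... a_0)."""
    return Poly([int(d) for d in str(p)], x)

p = 1033                       # prime; digits 1, 0, 3, 3
assert isprime(p)
f = cohn_polynomial(p)         # x**3 + 3*x + 3
print(f.is_irreducible)        # True, as Cohn's criterion predicts
```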
https://en.wikipedia.org/wiki/Sign-value%20notation
A sign-value notation represents numbers using a sequence of numerals which each represent a distinct quantity, regardless of their position in the sequence. Sign-value notations are typically additive, subtractive, or multiplicative depending on their conventions for grouping signs together to collectively represent numbers. Although the absolute value of each sign is independent of its position, the value of the sequence as a whole may depend on the order of the signs, as with numeral systems which combine additive and subtractive notation, such as Roman numerals. There is no need for zero in sign-value notation. Additive notation Additive notation represents numbers by a series of numerals that added together equal the value of the number represented, much as tally marks are added together to represent a larger number. To represent multiples of the sign value, the same sign is simply repeated. In Roman numerals, for example, X means ten and L means fifty, so LXXX means eighty (50 + 10 + 10 + 10). Although signs may be written in a conventional order, the value of each sign does not depend on its place in the sequence, and changing the order does not affect the total value of the sequence in an additive system. Frequently used large numbers are often expressed using unique symbols to avoid excessive repetition. Aztec numerals, for example, use a tally of dots for numbers less than twenty alongside unique symbols for powers of twenty, including 400 and 8,000. Subtractive notation Subtractive notation represents numbers by a series of numerals in which signs representing smaller values are typically subtracted from those representing larger values to equal the value of the number represented. In Roman numerals, for example, I means one and X means ten, so IX means nine (10 − 1). The consistent use of the subtractive system with Roman numerals was not standardised until after the widespread adoption of the printing press in Europe. History Sign-value notation was the
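The interaction of the two rules is easy to express in code. The sketch below (an illustration, not part of the article) evaluates a Roman numeral by adding each sign, except that a sign smaller than its right-hand neighbour is subtracted.

```python
VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(numeral):
    total = 0
    for sign, nxt in zip(numeral, numeral[1:] + ' '):
        value = VALUES[sign]
        # Subtractive: IX = 10 - 1. Additive otherwise: LXXX = 50 + 10 + 10 + 10.
        total += -value if nxt != ' ' and value < VALUES[nxt] else value
    return total

assert roman_to_int('LXXX') == 80       # purely additive
assert roman_to_int('IX') == 9          # subtractive
assert roman_to_int('MCMXCIV') == 1994  # both rules combined
```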
https://en.wikipedia.org/wiki/Recursive%20function
Recursive function may refer to: Recursive function (programming), a function which references itself General recursive function, a computable partial function from natural numbers to natural numbers Primitive recursive function, a function which can be computed with loops of bounded length Another name for computable function See also Recurrence relation, an equation which defines a sequence from initial values Recursion theory, the study of computability Recursion
https://en.wikipedia.org/wiki/Xybots
Xybots is a 1987 third-person shooter arcade game by Atari Games. In Xybots, up to two players control "Major Rock Hardy" and "Captain Ace Gunn", who must travel through a 3D maze and fight against a series of robots known as the Xybots whose mission is to destroy all mankind. The game features a split-screen display showing the gameplay on the bottom half of the screen and information on player status and the current level on the top half. Designed by Ed Logg, it was originally conceived as a sequel to his previous title, Gauntlet. The game was well received, with reviewers lauding the game's various features, particularly the cooperative multiplayer aspect. Despite this, it was met with limited financial success, which has been attributed to its unique control scheme that involves rotating the joystick to turn the player character. Xybots was ported to various personal computers and the Atari Lynx handheld. Versions for the Nintendo Entertainment System and Sega Genesis/Mega Drive were announced, but never released. Emulated versions of the arcade game were later released as part of various compilations, starting with Midway Arcade Treasures 2 in 2004. Gameplay One or two players navigate through corridors as either Rock Hardy or Ace Gunn, battling enemy Xybots with a laser gun, seeking cover from enemy fire behind various objects and attempting to reach the level's exit. In certain levels, players face off against a large boss Xybot. Players move using the joystick, which also rotates to turn the player character. The lower half of the screen shows the gameplay area for both players while the upper half is split between the map for the current level and the status display for each player. The display shows the player's remaining energy, which can be replenished by collecting energy pods within the levels. Energy can also be purchased at shops between levels, using coins dropped by defeated Xybots. The player can also purchase power-ups at these shops, including e
https://en.wikipedia.org/wiki/Uncertainty%20quantification
Uncertainty quantification (UQ) is the science of quantitative characterization and estimation of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict the acceleration of a human body in a head-on crash with another car: even if the speed was exactly known, small differences in the manufacturing of individual cars, how tightly every bolt has been tightened, etc., will lead to different results that can only be predicted in a statistical sense. Many problems in the natural sciences and engineering are also rife with sources of uncertainty. Computer experiments on computer simulations are the most common approach to study problems in uncertainty quantification. Sources Uncertainty can enter mathematical models and experimental measurements in various contexts. One way to categorize the sources of uncertainty is to consider: Parameter This comes from the model parameters that are inputs to the computer model (mathematical model) but whose exact values are unknown to experimentalists and cannot be controlled in physical experiments, or whose values cannot be exactly inferred by statistical methods. Some examples of this are the local free-fall acceleration in a falling object experiment, various material properties in a finite element analysis for engineering, and multiplier uncertainty in the context of macroeconomic policy optimization. Parametric This comes from the variability of input variables of the model. For example, the dimensions of a work piece in a process of manufacture may not be exactly as designed and instructed, which would cause variability in its performance. Structural uncertainty Also known as model inadequacy, model bias, or model discrepancy, this comes from the lack of knowledge of the underlying physics in the problem. It depends on how accurately a mathematical model describes the true sys
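A minimal forward-propagation sketch in Python, using the falling-object example mentioned above; the model, distributions, and numbers are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Uncertain inputs: local free-fall acceleration (parameter uncertainty)
# and drop height (parametric variability); distributions are assumed.
g = rng.normal(9.81, 0.02, N)
h = rng.normal(10.0, 0.10, N)

# Simple model, no drag: fall time t = sqrt(2 h / g).
t = np.sqrt(2.0 * h / g)

# The outcome can only be characterized statistically.
print(f"mean {t.mean():.4f} s, std {t.std():.4f} s")
lo, hi = np.percentile(t, [2.5, 97.5])
print(f"95% interval: [{lo:.4f}, {hi:.4f}] s")
```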
https://en.wikipedia.org/wiki/Logical%20machine
A logical machine or logical abacus is a tool containing a set of parts that uses energy to perform formal logic operations through the use of truth tables. Early logical machines were mechanical devices that performed basic operations in Boolean logic. The principal examples of such machines are those of William Stanley Jevons (logic piano), John Venn, and Allan Marquand. Contemporary logical machines are computer-based electronic programs that perform proof assistance with theorems in mathematical logic. In the 21st century, these proof assistant programs have given birth to a new field of study called mathematical knowledge management. Origins The earliest logical machines were mechanical constructs built in the late 19th century. William Stanley Jevons invented the first logical machine in 1869, the logic piano. In 1883, Allan Marquand invented a new logical machine that performed the same operations as Jevons' logic piano but with improvements in design simplification, portability, and input-output controls. A logical abacus is constructed to show all the possible combinations of a set of logical terms with their negatives, and, further, the way in which these combinations are affected by the addition of attributes or other limiting words, i.e., to simplify mechanically the solution of logical problems. These instruments are all more or less elaborate developments of the "logical slate", on which were written in vertical columns all the combinations of symbols or letters which could be made logically out of a definite number of terms. These were compared with any given premises, and those which were incompatible were crossed off. In the abacus the combinations are inscribed each on a single slip of wood or similar substance, which is moved by a key; incompatible combinations can thus be mechanically removed at will, in accordance with any given series of premises. See also Allan Marquand William Stanley Jevons Logics for computability References Bibl
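The "crossing off" procedure described above is straightforward to mimic in software. The sketch below (an illustration; the premises are invented) enumerates all combinations of three terms with their negatives and removes those incompatible with the premises, just as a logical abacus does mechanically.

```python
from itertools import product

terms = ('A', 'B', 'C')
rows = list(product([True, False], repeat=len(terms)))   # 2**3 combinations

premises = [
    lambda A, B, C: (not A) or B,      # "All A are B"  (A implies B)
    lambda A, B, C: not (B and C),     # "No B are C"
]

surviving = [row for row in rows if all(p(*row) for p in premises)]
for row in surviving:                  # the combinations the abacus retains
    print(dict(zip(terms, row)))
```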
https://en.wikipedia.org/wiki/Earcon
An earcon is a brief, distinctive sound that represents a specific event or conveys other information. Earcons are a common feature of computer operating systems and applications, ranging from a simple beep to indicate an error, to the customizable sound schemes of modern operating systems that indicate startup, shutdown, and other events. The name is a pun on the more familiar term icon in computer interfaces. Icon sounds like "eye-con" and is visual, which inspired D.A. Sumikawa to coin "earcon" as the auditory equivalent in a 1985 article, 'Guidelines for the integration of audio cues into computer user interfaces.' The term is most commonly applied to sound cues in a computer interface, but examples of the concept occur in broadcast media such as radio and television: The alert signal that indicates a message from the Emergency Broadcast System The signature three-tone melody that identifies NBC in radio and television broadcasts Earcons are generally synthesized tones or sound patterns. The similar term auditory icon refers to recorded everyday sounds that serve the same purpose. Use in assistive technologies Assistive technologies for computing devices—such as screen readers including ChromeOS's ChromeVox, Android's TalkBack and Apple's VoiceOver—use earcons as a convenient and fast means of conveying to blind or visually impaired users contextual information about the interface they are navigating. Earcons in screen readers largely serve as auditory cues to inform the user that they have selected a particular type of interface element, such as a button, hyperlink or text input field. They can also provide context about the current document or mode, such as whether a web page is loading. Earcons provide an enhancement to screen reader usage due to their brevity and subtleness, which is an improvement over using much longer spoken cues to provide context: using a short, distinctive beep when an interface's button is selected can be much faster and ther
https://en.wikipedia.org/wiki/Ain%20al-Yaqeen
Ain al Yaqeen (Heart of the Matter in English) is an Arabic news magazine published weekly, focusing on political topics. Profile Ain al Yaqeen also has an English edition. It is published online. The magazine is seen as a government publication or as a semi-official weekly political magazine. Contents After it was revealed that a member of the royal family had indirectly funded one of the hijackers in the September 11 attacks, Prince Nayef in an article published in the English edition of the weekly on 29 November 2002 claimed that the Jews were behind the attacks. See also List of magazines in Saudi Arabia References Magazines published in Saudi Arabia Arabic-language magazines English-language magazines Saudi Arabian news websites English-language websites Arabic-language websites Weekly magazines Online magazines News magazines published in Asia
https://en.wikipedia.org/wiki/Guitar%20tech
A guitar technician (or guitar tech) is a member of a music ensemble's road crew who maintains and sets up the musical equipment for one or more guitarists. Depending on the type and size of band, the guitar tech may be responsible for stringing, tuning, and adjusting electric guitars and acoustic guitars, and maintaining and setting up guitar amplifiers and other related electronic equipment such as effect pedals. Once the guitar equipment has been set up onstage, the guitar tech does a soundcheck to ensure that the equipment is working well. If there are any problems, the guitar tech replaces or repairs the faulty components or equipment. Since guitar techs need to soundcheck the instruments and amplifiers, they must have basic guitar-playing skills, a musical "ear" for tuning, and a familiarity with the way guitars, amplifiers, and effect pedals are supposed to sound in the style of music of their band. Guitar techs learn their craft either "on the job", by working in a range of music, sound engineering, and instrument repair jobs; by completing a guitar repair program at a college or lutherie school; or from a combination of these two routes. The salaries and conditions of work for guitar techs vary widely, depending on whether a guitar tech is working for a minor or regional touring bar band or a major international touring act. Duties Setting up and soundchecking The duties of a guitar technician depend on the type of band they are working for, and on a range of other factors such as the size and nature of the stage show and the length of the show. Guitar technicians who work for an acoustic band, such as a folk group or bluegrass ensemble may be responsible for setting up and stringing, and tuning a range of stringed, fretted instruments including acoustic guitars, dobros, and mandolins. A guitar tech for a heavy metal band, on the other hand, may focus mainly on electric guitars, guitar amplifiers, and effects pedals. A guitar tech may change the sequen
https://en.wikipedia.org/wiki/Ussing%20chamber
An Ussing chamber is an apparatus for measuring epithelial membrane properties. It can detect and quantify transport and barrier functions of living tissue. The Ussing chamber was invented by the Danish zoologist and physiologist Hans Henriksen Ussing in 1946. The technique is used to measure the short-circuit current as an indicator of net ion transport taking place across an epithelium. Ussing chambers are used to measure ion transport in native tissue, such as gut mucosa, and in a monolayer of cells grown on permeable supports. Function The Ussing chamber provides a system to measure the transport of ions, nutrients, and drugs across various epithelial tissues (although it can generate false-negative results for lipophilic substances). It consists of two halves separated by the epithelia (a sheet of mucosa or a monolayer of epithelial cells grown on permeable supports). Epithelia are polar in nature, i.e., they have an apical or mucosal side and a basolateral or serosal side. An Ussing chamber can isolate the apical side from the basolateral side. The two half chambers are filled with equal amounts of symmetrical Ringer solution to remove chemical, mechanical or electrical driving forces. Ion transport takes place across any epithelium. Transport may be in either direction. Ion transport produces a potential difference (voltage difference) across the epithelium. The voltage is measured using two voltage electrodes placed near the tissue/epithelium. This voltage is cancelled out by injecting current, using two other current electrodes placed away from the epithelium. This short-circuit current (Isc) is the measure of net ion transport. Ussing chambers facilitate the measurement of epithelial ion transport, and the voltage resulting from this ion transport is easy to measure accurately. The epithelium pumps ions from one side to the other and the ions leak back through so-called tight junctions that are situated between the epithelial cells. To measure ion transport, an external
https://en.wikipedia.org/wiki/MBC%20Paper%20of%20the%20Year
Chosen by Molecular Biology of the Cell Associate Editors, the MBoC Paper of the Year is awarded to the first author of the paper judged to be the best of the year in the field of molecular biology; the award year runs from June to May. Awardees Source: MBoC See also List of biology awards References American Society for Cell Biology Biology awards American science and technology awards Awards established in 1991 1991 establishments in the United States
https://en.wikipedia.org/wiki/Merton%20Bernfield%20Memorial%20Award
The Merton Bernfield Memorial Award, formerly known as the Member Memorial Award For Graduate Students and Postdoctoral Fellows, was established in memory of deceased colleagues through donations from members of the American Society for Cell Biology. The winner is selected on merit and is invited to speak in a Minisymposium at the ASCB Annual Meeting. The winner also receives financial support. Awardees Source: 2019 Veena Padmanaban 2018 Kelsie Eichel 2017 Lawrence Kazak 2016 Kara McKinley 2015 Shigeki Watanabe 2014 Prasanna Satpute-Krishnan 2013 Panteleimon Rompolas 2012 Ting Chen and Gabriel Lander 2011 Dylan Tyler Burnette 2010 Hua Jin 2009 Chad G. Pearson 2008 Kenneth Campellone 2007 Ethan Garner 2006 Lloyd Trotman 2005 Stephanie Gupton 2004 Chun Han 2003 Erik Dent 2002 Christina Hull 2001 Sarah South and James Wohlschlegel See also List of biology awards References American Society for Cell Biology Biology awards American science and technology awards Awards established in 2001
https://en.wikipedia.org/wiki/Studio%20monitor
Studio monitors are loudspeakers in speaker enclosures specifically designed for professional audio production applications, such as recording studios, filmmaking, television studios, radio studios and project or home studios, where accurate audio reproduction is crucial. Among audio engineers, the term monitor implies that the speaker is designed to produce relatively flat (linear) phase and frequency responses. In other words, it exhibits minimal emphasis or de-emphasis of particular frequencies; the loudspeaker gives an accurate reproduction of the tonal qualities of the source audio ("uncolored" and "transparent" are synonyms); and there will be no relative phase shift of particular frequencies—meaning no distortion in sound-stage perspective for stereo recordings. Beyond stereo sound-stage requirements, a linear phase response helps impulse response remain true to source without encountering "smearing". An unqualified reference to a monitor often refers to a near-field (compact or close-field) design. This is a speaker small enough to sit on a stand or desk in proximity to the listener, so that most of the sound that the listener hears is coming directly from the speaker, rather than reflecting off walls and ceilings (and thus picking up coloration and reverberation from the room). Monitor speakers may include more than one type of driver (e.g., a tweeter and a woofer) or, for monitoring low-frequency sounds, such as bass drum, additional subwoofer cabinets may be used. There are studio monitors designed for mid-field or far-field use as well. These are larger monitors with approximately 12 inch or larger woofers, suited to the bigger studio environment. They extend the width of the sweet spot, allowing "accurate stereo imaging for multiple persons". They tend to be used in film scoring environments, where simulation of larger sized areas like theaters is important. Also, studio monitors are made in a more physically robust manner than home hi-fi loudspeak
https://en.wikipedia.org/wiki/Graduation%20tower
A graduation tower (occasionally referred to as a thorn house) is a structure used in the production of salt which removes water from a saline solution by evaporation, increasing its concentration of mineral salts. The tower consists of a wooden wall-like frame stuffed with bundles of brushwood (typically blackthorn) which have to be changed about every 5 to 10 years as they become encrusted with mineral deposits over time. The salt water runs down the tower and partly evaporates; at the same time, some minerals from the solution are left behind on the brushwood twigs. Graduation towers can be found in a number of spa towns, primarily in Germany but also Poland and Austria. The mineral-rich water droplets in the air are regarded as having beneficial health effects similar to that of breathing in sea air. A large complex of graduation towers is located in Ciechocinek and Inowrocław, Poland. This entirely wooden construction in Ciechocinek was erected in the 19th century by Stanisław Staszic. The complex consists of three graduation towers with a total length of over 2 km. Many tourists visit it for health reasons. Gallery Partial list of towns and cities with graduation towers With years of initial construction where available. Does not include modern indoor facilities found in some spas. France Saulnot (16th century) Arc-et-Senans (1775) Germany Bad Dürkheim (1736) Bad Dürrenberg Bad Essen Bad Karlshafen (1986) Bad Kissingen (16th century) Bad Kreuznach (1732) Bad Kösen Bad Münster am Stein (1729) Bad Nauheim Bad Oeynhausen Bad Orb (1806) Bad Rappenau (2008) Bad Reichenhall (1911) Bad Rothenfelde (1777) Bad Salzdetfurth Bad Salzelmen (part of Schönebeck, 1756) Bad Salzhausen (around 1600) Bad Salzuflen (18th century) Bad Salzungen Bad Sassendorf Bad Soden (part of Bad Soden-Salmünster, 2006) Bad Sooden-Allendorf Bad Staffelstein Eibach (part of Dillenburg, 2004) Hamm (2008) Lüneburg (1907) Rheine (Saline Gottesgabe) Salzgitter-Bad (2009) Salzkotten Pol
https://en.wikipedia.org/wiki/Pittsburgh%20Supercomputing%20Center
The Pittsburgh Supercomputing Center (PSC) is a high performance computing and networking center founded in 1986 and one of the original five NSF Supercomputing Centers. PSC is a joint effort of Carnegie Mellon University and the University of Pittsburgh in Pittsburgh, Pennsylvania, United States. In addition to providing a family of Big Data-optimized supercomputers with unique shared memory architectures, PSC features the National Institutes of Health-sponsored National Resource for Biomedical Supercomputing, an Advanced Networking Group that conducts research on network performance and analysis, and a STEM education and outreach program supporting K-20 education. In 2012, PSC established a new Public Health Applications Group that will apply supercomputing resources to problems in preventing, monitoring and responding to epidemics and other public health needs. Mission The Pittsburgh Supercomputing Center provides university, government, and industrial researchers with access to several of the most powerful systems for high-performance computing, communications and data-handling and analysis available nationwide for unclassified research. As a resource provider in the Extreme Science and Engineering Discovery Environment (XSEDE), the National Science Foundation's network of integrated advanced digital resources, PSC works with its XSEDE partners to harness the full range of information technologies to enable discovery in U.S. science and engineering. Partnerships PSC is a leading partner in XSEDE. PSC-scientific co-director Ralph Roskies is a co-principal investigator of XSEDE and co-leads its Extended Collaborative Support Services. Other PSC staff lead XSEDE efforts in Networking, Incident Response, Systems & Software Engineering, Outreach, Allocations Coordination, and Novel & Innovative Projects. This NSF-funded program provides U.S. academic researchers with support for and access to leadership-class computing infrastructure and research. The Natio
https://en.wikipedia.org/wiki/Causes%20of%20gender%20incongruence
Gender incongruence is the state of having a gender identity that does not correspond to one's sex assigned at birth. This is experienced by people who identify as transgender or transsexual, and often results in gender dysphoria. The causes of gender incongruence have been studied for decades. Transgender brain studies, especially those on trans women attracted to women (gynephilic), and those on trans men attracted to men (androphilic), are limited, as they include only a small number of tested individuals. Studies conducted on twins suggest that there are likely genetic causes of gender incongruence, although the precise genes involved are not known or fully understood. Biological factors Genetics A 2008 study compared the genes of 112 trans women who were mostly already undergoing hormone treatment, with 258 cisgender male controls. Trans women were more likely than cisgender males to have a longer version of a receptor gene (longer repetitions of the gene) for the sex hormone androgen, which reduced its effectiveness at binding testosterone. The androgen receptor (NR3C4) is activated by the binding of testosterone or dihydrotestosterone, where it plays a critical role in the forming of primary and secondary male sex characteristics. The research weakly suggests reduced androgen and androgen signaling contributes to trans women's identity. The authors say that a decrease in testosterone levels in the brain during development might prevent complete masculinization of trans women's brains, thereby causing a more feminized brain and a female gender identity. A variant genotype for the CYP17 gene, which acts on the sex hormones pregnenolone and progesterone, has been found to be linked to transsexuality in trans men but not in trans women. Most notably, transmasculine subjects not only had the variant genotype more frequently, but had an allele distribution equivalent to cisgender male controls, unlike the cisgender female controls. The paper concluded that the
https://en.wikipedia.org/wiki/Apparent%20temperature
Apparent temperature, also known as "feels like", is the temperature equivalent perceived by humans, caused by the combined effects of air temperature, relative humidity and wind speed. The measure is most commonly applied to the perceived outdoor temperature. Apparent temperature was invented by Robert Steadman who published a paper about it in 1984. However, it also applies to indoor temperatures, especially saunas, and when houses and workplaces are not sufficiently heated or cooled. The heat index and humidex measure the effect of humidity on the perception of temperatures above . In humid conditions, the air feels much hotter, because less perspiration evaporates from the skin. The wind chill factor measures the effect of wind speed on cooling of the human body below . As airflow increases over the skin, more heat will be removed. Standard models and conditions are used. The wet-bulb globe temperature (WBGT) combines the effects of radiation (typically sunlight), humidity, temperature and wind speed on the perception of temperature. It is not often used, since its measurement requires the use of a globe thermometer exposed to the sun, which is not included in standard meteorological equipment used in official weather conditions reporting (nor are, in most cases, any other explicit means of measuring solar radiation; temperature measurement takes place entirely in a shade box to avoid direct solar effects). It also does not have an explicit relationship with the perceived temperature a person feels; when used for practical purposes, the WBGT is linked to a category system to estimate the threat of heat-related illness. Since there is no direct measurement of solar radiation in U.S. observation systems, and solar radiation can add up to to the apparent temperature, commercial weather companies have attempted to develop their own proprietary apparent temperature systems, including The Weather Company's "FeelsLike" and AccuWeather's "RealFeel". These systems,
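To illustrate how one of these indices is computed in practice, the sketch below implements the North American wind chill formula adopted in 2001 by the U.S. and Canadian weather services (air temperature in degrees Fahrenheit, wind speed in miles per hour); the function name and the range guard are illustrative choices, not part of any standard API.

#include <math.h>
#include <stdio.h>

/* North American wind chill index (2001 revision): valid for air
   temperatures at or below 50 F and wind speeds above 3 mph. */
static double wind_chill_f(double temp_f, double wind_mph)
{
    if (temp_f > 50.0 || wind_mph <= 3.0)
        return temp_f;   /* outside the model's stated range: no adjustment */
    double v = pow(wind_mph, 0.16);
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v;
}

int main(void)
{
    /* 20 F air temperature with a 25 mph wind "feels like" about 3 F. */
    printf("apparent temperature: %.1f F\n", wind_chill_f(20.0, 25.0));
    return 0;
}

The heat index used for warm, humid conditions is a similar regression fit, in temperature and relative humidity, applied in the same way.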
https://en.wikipedia.org/wiki/The%20Sleuth%20Kit
The Sleuth Kit (TSK) is a library and collection of Unix- and Windows-based utilities for extracting data from disk drives and other storage so as to facilitate the forensic analysis of computer systems. It forms the foundation for Autopsy, a better-known tool that is essentially a graphical user interface to the command line utilities bundled with The Sleuth Kit. The collection is open source and protected by the GPL, the CPL and the IPL. The software is under active development and is supported by a team of developers. The initial development was done by Brian Carrier, who based it on The Coroner's Toolkit; it is that toolkit's official successor platform. The Sleuth Kit is capable of parsing NTFS, FAT/ExFAT, UFS 1/2, Ext2, Ext3, Ext4, HFS, ISO 9660 and YAFFS2 file systems, either separately or within disk images stored in raw (dd), Expert Witness or AFF formats. The Sleuth Kit can be used to examine most Microsoft Windows, most Apple Mac OS X, many Linux and some other UNIX computers. The Sleuth Kit can be used via the included command line tools, or as a library embedded within a separate digital forensic tool such as Autopsy or log2timeline/plaso. Tools Some of the tools included in The Sleuth Kit include: ils lists all metadata entries, such as an Inode. blkls displays data blocks within a file system (formerly called dls). fls lists allocated and unallocated file names within a file system. fsstat displays file system statistical information about an image or storage medium. ffind searches for file names that point to a specified metadata entry. mactime creates a timeline of all files based upon their MAC times. disk_stat (currently Linux-only) discovers the existence of a Host Protected Area. Applications The Sleuth Kit is used in forensics; its main purpose is understanding what data is stored on a disk drive, even if the operating system has removed all metadata. It is also used for recovering deleted image files and for summarizing all deleted files
https://en.wikipedia.org/wiki/RSS%20Advisory%20Board
The RSS Advisory Board is a group founded in July 2003 that publishes the RSS 0.9, RSS 0.91 and RSS 2.0 specifications and helps developers create RSS applications. Dave Winer, the lead author of several RSS specifications and a longtime evangelist of syndication, created the board to maintain the RSS 2.0 specification in cooperation with Harvard's Berkman Center. In January 2006, RSS Advisory Board chairman Rogers Cadenhead announced that eight new members had joined the group, continuing the development of the RSS format and resolving ambiguities in the RSS 2.0 specification. Netscape developer Christopher Finke joined the board in March 2007, the company's first involvement in RSS since the publication of RSS 0.91. In June 2007, the board revised its version of the specification to confirm that namespaces may extend core elements with namespace attributes, as Microsoft has done in Internet Explorer 7. In its view, a difference of interpretation left publishers unsure of whether this was permitted or forbidden. In January 2008, Netscape announced that the RSS 0.9 and RSS 0.91 specifications, document type definitions and related documentation that it had published since their creation in 1999 were moving to the board. Yahoo transferred the Media RSS specification to the board in December 2009. Current members Rogers Cadenhead Sterling Camden Simone Carletti James Holderness Jenny Levine Eric Lunt Randy Charles Morin Ryan Parman Paul Querna Jake Savin Jason Shellen References External links RSS Organizations established in 2003 Internet-related organizations
https://en.wikipedia.org/wiki/Setcontext
setcontext is one of a family of C library functions (the others being getcontext, makecontext and swapcontext) used for context control. The setcontext family allows the implementation in C of advanced control flow patterns such as iterators, fibers, and coroutines. They may be viewed as an advanced version of setjmp/longjmp; whereas the latter allows only a single non-local jump up the stack, setcontext allows the creation of multiple cooperative threads of control, each with its own stack. Specification The setcontext family was specified in POSIX.1-2001 and the Single Unix Specification, version 2, but not all Unix-like operating systems provide these functions. POSIX.1-2004 obsoleted them, and in POSIX.1-2008 they were removed, with POSIX Threads indicated as a possible replacement. Citing IEEE Std 1003.1, 2004 Edition: With the incorporation of the ISO/IEC 9899:1999 standard into this specification it was found that the ISO C standard (Subclause 6.11.6) specifies that the use of function declarators with empty parentheses is an obsolescent feature. Therefore, using the function prototype: void makecontext(ucontext_t *ucp, void (*func)(), int argc, ...); is making use of an obsolescent feature of the ISO C standard. Therefore, a strictly conforming POSIX application cannot use this form. Therefore, use of getcontext(), makecontext(), and swapcontext() is marked obsolescent. There is no way in the ISO C standard to specify a non-obsolescent function prototype indicating that a function will be called with an arbitrary number (including zero) of arguments of arbitrary types (including integers, pointers to data, pointers to functions, and composite types). Definitions The functions and associated types are defined in the ucontext.h system header file. This includes the ucontext_t type, with which all four functions operate: typedef struct { ucontext_t *uc_link; sigset_t uc_sigmask; stack_t uc_stack; mcontext_t uc_mcontext; ... } ucontext_t; uc_link points to the context that will be resumed when the current context exits, if the context was created with makecontext (a secondary context).
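A minimal sketch of cooperative switching with these functions is shown below; the stack size, messages, and function names are arbitrary choices, and error checking is omitted for brevity.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;

static void coroutine(void)
{
    puts("coroutine: first entry");
    swapcontext(&co_ctx, &main_ctx);     /* yield back to main */
    puts("coroutine: resumed");
}                                        /* returning resumes uc_link */

int main(void)
{
    static char stack[64 * 1024];        /* private stack for the coroutine */

    getcontext(&co_ctx);                 /* initialize with the current state */
    co_ctx.uc_stack.ss_sp   = stack;
    co_ctx.uc_stack.ss_size = sizeof stack;
    co_ctx.uc_link          = &main_ctx; /* context to resume when it returns */
    makecontext(&co_ctx, coroutine, 0);

    swapcontext(&main_ctx, &co_ctx);     /* run coroutine until it yields */
    puts("main: coroutine yielded");
    swapcontext(&main_ctx, &co_ctx);     /* resume it; finishes via uc_link */
    puts("main: coroutine finished");
    return 0;
}

Each swapcontext call saves the current context into its first argument and activates the second, which is what gives each thread of control its own stack.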
https://en.wikipedia.org/wiki/TuVox
TuVox is a company that produces VXML-based telephone speech-recognition applications to replace DTMF touch-tone systems for their clients. History TuVox was founded in 2001 by Steven S. Pollock and Ashok Khosla, formerly of Apple Computer Corporation and Claris Corporation. Since then, TuVox has grown to over 150 employees and has US offices in Cupertino, California and Boca Raton, Florida, as well as international offices in London, Vancouver and Sydney. In 2005, TuVox acquired the customers and hosting facilities of Net-By-Tel. In 2007, the company raised $20m for its speech-recognition and phone-menu software. On July 22, 2010, West Interactive — a subsidiary of West Corporation — announced its acquisition of TuVox. Customers TuVox clients include 1-800-Flowers.com, AMC Entertainment, American Airlines, British Airways, M&T Bank, Canon Inc., Gateway, Inc., Motorola, Progress Energy Inc., Telecom New Zealand, Time, Inc., BECU, Virgin America and USAA. References Telephony Speech processing Applications of artificial intelligence Companies established in 2001 Companies based in Cupertino, California
https://en.wikipedia.org/wiki/Stable%20isotope%20labeling%20by%20amino%20acids%20in%20cell%20culture
Stable Isotope Labeling by/with Amino acids in Cell culture (SILAC) is a technique based on mass spectrometry that detects differences in protein abundance among samples using non-radioactive isotopic labeling. It is a popular method for quantitative proteomics. Procedure Two populations of cells are cultivated in cell culture. One of the cell populations is fed with growth medium containing normal amino acids. In contrast, the second population is fed with growth medium containing amino acids labeled with stable (non-radioactive) heavy isotopes. For example, the medium can contain arginine labeled with six carbon-13 atoms (13C) instead of the normal carbon-12 (12C). When the cells are growing in this medium, they incorporate the heavy arginine into all of their proteins. Thereafter, all peptides containing a single arginine are 6 Da heavier than their normal counterparts. Alternatively, uniform labeling with 13C or 15N can be used. Proteins from both cell populations are combined and analyzed together by mass spectrometry as pairs of chemically identical peptides of different stable-isotope composition can be differentiated in a mass spectrometer owing to their mass difference. The ratio of peak intensities in the mass spectrum for such peptide pairs reflects the abundance ratio for the two proteins. Applications A SILAC approach involving incorporation of tyrosine labeled with nine carbon-13 atoms (13C) instead of the normal carbon-12 (12C) has been utilized to study tyrosine kinase substrates in signaling pathways. SILAC has emerged as a very powerful method to study cell signaling, post translation modifications such as phosphorylation, protein–protein interaction and regulation of gene expression. In addition, SILAC has become an important method in secretomics, the global study of secreted proteins and secretory pathways. It can be used to distinguish between proteins secreted by cells in culture and serum contaminants. Standardized protocols of SILAC for
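To make the quantification arithmetic concrete, the sketch below computes the m/z spacing expected between a light/heavy peptide pair labeled with 13C6-arginine, and a crude abundance ratio from two peak intensities; the isotope masses are standard values, while the charge state and intensities are invented for illustration.

#include <stdio.h>

/* Mass added per arginine when all six carbons are 13C:
   6 x (13.003355 - 12.000000), approximately 6.0201 Da. */
#define DELTA_13C6 (6.0 * (13.003355 - 12.000000))

int main(void)
{
    int z = 2;            /* peptide observed as [M+2H]2+ (example choice) */
    printf("expected m/z spacing at z=%d: %.4f\n", z, DELTA_13C6 / z);

    /* Invented peak intensities for the light and heavy partners. */
    double light = 1.8e6, heavy = 0.6e6;
    printf("abundance ratio (light/heavy): %.2f\n", light / heavy);
    return 0;
}

The same spacing argument explains why the text speaks of peptides "6 Da heavier": the nominal shift is 6 Da per labeled arginine, divided by the charge state in the observed spectrum.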
https://en.wikipedia.org/wiki/Honeycomb%20%28geometry%29
In geometry, a honeycomb is a space filling or close packing of polyhedral or higher-dimensional cells, so that there are no gaps. It is an example of the more general mathematical tiling or tessellation in any number of dimensions. Its dimension can be clarified as n-honeycomb for a honeycomb of n-dimensional space. Honeycombs are usually constructed in ordinary Euclidean ("flat") space. They may also be constructed in non-Euclidean spaces, such as hyperbolic honeycombs. Any finite uniform polytope can be projected to its circumsphere to form a uniform honeycomb in spherical space. Classification There are infinitely many honeycombs, which have only been partially classified. The more regular ones have attracted the most interest, while a rich and varied assortment of others continue to be discovered. The simplest honeycombs to build are formed from stacked layers or slabs of prisms based on some tessellations of the plane. In particular, for every parallelepiped, copies can fill space, with the cubic honeycomb being special because it is the only regular honeycomb in ordinary (Euclidean) space. Another interesting family is the Hill tetrahedra and their generalizations, which can also tile the space. Uniform 3-honeycombs A 3-dimensional uniform honeycomb is a honeycomb in 3-space composed of uniform polyhedral cells, and having all vertices the same (i.e., the group of [isometries of 3-space that preserve the tiling] is transitive on vertices). There are 28 convex examples in Euclidean 3-space, also called the Archimedean honeycombs. A honeycomb is called regular if the group of isometries preserving the tiling acts transitively on flags, where a flag is a vertex lying on an edge lying on a face lying on a cell. Every regular honeycomb is automatically uniform. However, there is just one regular honeycomb in Euclidean 3-space, the cubic honeycomb. Two are quasiregular (made from two types of regular cells): The tetrahedral-octahedral honeycomb and gyrat
https://en.wikipedia.org/wiki/Dolos
A dolos (plural: dolosse) is a wave-dissipating concrete block used in great numbers as a form of coastal management. It is a type of tetrapod. Weighing up to , dolosse are used to build revetments for protection against the erosive force of waves from a body of water. The dolos was invented in 1963, and was first deployed in 1964 on the breakwater of East London, a South African port city. Construction Dolosse are normally made from un-reinforced concrete, poured into a steel mould. The concrete will sometimes be mixed with small steel fibers to strengthen it in the absence of reinforcement. They are used to protect harbour walls, breakwaters and shore earthworks. In Dania Beach, Florida, dolosse are used as an artificial reef known as the Dania Beach Erojacks. They are also used to trap sea-sand to prevent erosion. Roughly 10,000 dolosse are required for a kilometre of coastline. They work by dissipating, rather than blocking, the energy of waves. Their design deflects most wave action energy to the side, making them more difficult to dislodge than objects of a similar weight presenting a flat surface. Though they are placed into position on top of each other by cranes, over time they tend to get further entangled as the waves shift them. Their design ensures that they form an interlocking but porous and slightly flexible wall. The individual units are often numbered so that their movements can be tracked. This helps engineers gauge whether they need to add more dolosse to the pile. Dolosse are also being used in rivers in the Pacific Northwest of the United States of America, to control erosion, prevent channel migration and to create and restore salmon habitat. Examples are engineered log jams, or ELJs, that may aid in efforts to save stocks of salmon. Local, county, state, federal and private industry experts in engineering design, fluvial geomorphology and fisheries conservation are working together to protect important public infrastructure such as r
https://en.wikipedia.org/wiki/VMware%20Workstation%20Player
VMware Workstation Player, formerly VMware Player, is a virtualization software package for x64 computers running Microsoft Windows or Linux, supplied free of charge by VMware, Inc. VMware Player can run existing virtual appliances and create its own virtual machines (which require that an operating system be installed to be functional). It uses the same virtualization core as VMware Workstation, a similar program with more features, which is not free of charge. VMware Player is available for personal non-commercial use, or for distribution or other use by written agreement. VMware, Inc. does not formally support Player, but there is an active community website for discussing and resolving issues, as well as a knowledge base. The free VMware Player was distinct from VMware Workstation until Player v7, Workstation v11. In 2015 the two packages were combined as VMware Workstation 12, with a free for non-commercial use Player version which, on purchase of a license code, either became the higher-specification VMware Workstation Pro, or allowed commercial use of Player. Features VMware claimed in 2011 that the Player offered better graphics, faster performance, and tighter integration for running Windows XP under Windows Vista or Windows 7 than Microsoft's Windows XP Mode running on Windows Virtual PC, which is free of charge for all purposes. Versions earlier than 3 of VMware Player were unable to create virtual machines (VMs), which had to be created by an application with the capability, or created manually by statements stored in a text file with extension ".vmx"; later versions can create VMs. The features of Workstation not available in Player are "developer-centric features such as Teams, multiple Snapshots and Clones, and Virtual Rights Management features for end-point security", and support by VMware. Player allows a complete virtual machine to be copied at any time by copying a directory; while not a fully featured snapshot facility, this allows a copy of
https://en.wikipedia.org/wiki/Equal-cost%20multi-path%20routing
Equal-cost multi-path routing (ECMP) is a routing strategy where packet forwarding to a single destination can occur over multiple best paths with equal routing priority. Multi-path routing can be used in conjunction with most routing protocols because it is a per-hop local decision made independently at each router. It can substantially increase bandwidth by load-balancing traffic over multiple paths; however, there may be significant problems in deploying it in practice. History Load balancing by per-packet multipath routing was generally disfavored due to the impact of rapidly changing latency, packet reordering and maximum transmission unit (MTU) differences within a network flow, which could disrupt the operation of many Internet protocols, most notably TCP and path MTU discovery. RFC 2992 analyzed one particular multipath routing strategy involving the assignment of flows through hashing flow-related data in the packet header. This solution is designed to avoid these problems by sending all packets from any particular network flow through the same path while balancing multiple flows over multiple paths in general. See also Link aggregation Shortest Path Bridging establishes multiple forward and reverse paths on Ethernet networks. Source routing TRILL enables per-flow pair-wise load splitting without configuration and user intervention. References External links Etutorials: Equal-Cost Multi-Path (ECMP) Routing Paris-Traceroute: traceroute for ECMP networks Dublin-Traceroute: NAT-aware traceroute for ECMP networks Traffic Engineering With Equal-Cost-MultiPath: An Algorithmic Perspective Routing algorithms
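The flow-hashing strategy analyzed in RFC 2992 can be sketched as follows: hash the packet's 5-tuple and use the result to pick one of the equal-cost next hops, so that every packet of a flow follows the same path while distinct flows spread across paths. The FNV-1a hash and modulo-N selection below are illustrative simplifications (RFC 2992 itself analyzes a hash-threshold variant), not any particular router's implementation.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct flow_key {                /* the classic 5-tuple */
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  protocol;
};

/* FNV-1a: a simple, well-known byte hash (illustrative choice). */
static uint32_t fnv1a(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}

/* All packets of one flow map to the same next hop. */
static unsigned pick_next_hop(const struct flow_key *k, unsigned n_paths)
{
    return fnv1a(k, sizeof *k) % n_paths;
}

int main(void)
{
    struct flow_key k;
    memset(&k, 0, sizeof k);     /* zero padding bytes for a stable hash */
    k.src_ip = 0x0A000001; k.dst_ip = 0x0A000002;
    k.src_port = 49152; k.dst_port = 443;
    k.protocol = 6;              /* TCP */
    printf("flow -> next hop %u of 4\n", pick_next_hop(&k, 4));
    return 0;
}

Because the hash is deterministic in the flow key, reordering within a flow is avoided; the cost is that a single large flow cannot be split across several paths.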
https://en.wikipedia.org/wiki/History%20of%20biotechnology
Biotechnology is the application of scientific and engineering principles to the processing of materials by biological agents to provide goods and services. From its inception, biotechnology has maintained a close relationship with society. Although now most often associated with the development of drugs, historically biotechnology has been principally associated with food, addressing such issues as malnutrition and famine. The history of biotechnology begins with zymotechnology, which commenced with a focus on brewing techniques for beer. By World War I, however, zymotechnology would expand to tackle larger industrial issues, and the potential of industrial fermentation gave rise to biotechnology. However, both the single-cell protein and gasohol projects failed to progress due to varying issues including public resistance, a changing economic scene, and shifts in political power. Yet the formation of a new field, genetic engineering, would soon bring biotechnology to the forefront of science in society, and the intimate relationship between the scientific community, the public, and the government would ensue. These debates gained exposure in 1975 at the Asilomar Conference, where Joshua Lederberg was the most outspoken supporter for this emerging field in biotechnology. By as early as 1978, with the development of synthetic human insulin, Lederberg's claims would prove valid, and the biotechnology industry grew rapidly. Each new scientific advance became a media event designed to capture public support, and by the 1980s, biotechnology grew into a promising real industry. In 1988, only five proteins from genetically engineered cells had been approved as drugs by the United States Food and Drug Administration (FDA), but this number would skyrocket to over 125 by the end of the 1990s. The field of genetic engineering remains a heated topic of discussion in today's society with the advent of gene therapy, stem cell research, cloning, and genetically modified food. W
https://en.wikipedia.org/wiki/Digital%20photo%20frame
A digital photo frame (also called a digital media frame) is a picture frame that displays digital photos without the need of a computer or printer. The introduction of digital photo frames predates tablet computers, which can serve the same purpose in some situations; however, digital photo frames are generally designed specifically for the stationary, aesthetic display of photographs and therefore usually provide a nicer-looking frame and a power system designed for continuous use. Digital photo frames come in a variety of different shapes and sizes with a range of features. Some may even play videos as well as display photographs. Owners can choose a digital photo frame that utilizes a WiFi connection or not, comes with cloud storage, and/or USB and SD card hub. Features Digital photo frames range in size from tiny keychain-sized units to large wall-mounted frames spanning several feet. The most common sizes range from to . Some digital photo frames can only display JPEG pictures. Most digital photo frames display the photos as a slideshow and usually with an adjustable time interval. They may also be able to send photos to a printer, or have hybrid features. Examples are the Sony S-Frame F800, that has an integrated printer on its back, or the Epson PictureMate Show. Digital photo frames typically allow the display of pictures directly from a camera's memory card, and may provide internal memory storage. Some allow users to upload pictures to the frame's memory via a USB connection, or wirelessly via Bluetooth technology. Others include support for wireless (802.11) connections or use cellular technology to transfer and share files. Some frames allow photos to be shared from a frame to another. Certain frames provide specific application support such as loading images over the Internet from RSS feeds, photo sharing sites such as Flickr, Picasa and from e-mail. Built-in speakers are common for playing video content with sound, and many frames also feature
https://en.wikipedia.org/wiki/Regular%20homotopy
In the mathematical field of topology, a regular homotopy refers to a special kind of homotopy between immersions of one manifold in another. The homotopy must be a 1-parameter family of immersions. Similar to homotopy classes, one defines two immersions to be in the same regular homotopy class if there exists a regular homotopy between them. Regular homotopy for immersions is similar to isotopy of embeddings: they are both restricted types of homotopies. Stated another way, two continuous functions f, g : M → N are homotopic if they represent points in the same path-component of the mapping space C(M, N), given the compact-open topology. The space of immersions is the subspace of C(M, N) consisting of immersions, denoted by Imm(M, N). Two immersions are regularly homotopic if they represent points in the same path-component of Imm(M, N). Examples Any two knots in 3-space are equivalent by regular homotopy, though not by isotopy. The Whitney–Graustein theorem classifies the regular homotopy classes of a circle into the plane; two immersions are regularly homotopic if and only if they have the same turning number – equivalently, total curvature; equivalently, if and only if their Gauss maps have the same degree/winding number. Stephen Smale classified the regular homotopy classes of a k-sphere immersed in R^n – they are classified by homotopy groups of Stiefel manifolds, which is a generalization of the Gauss map, with here k partial derivatives not vanishing. More precisely, the set of regular homotopy classes of immersions of the sphere S^k in R^n is in one-to-one correspondence with elements of the group π_k(V_k(R^n)). In the case k = n − 1, the Stiefel manifold V_{n−1}(R^n) is diffeomorphic to SO(n), so the classes correspond to elements of π_{n−1}(SO(n)). Both π_2(SO(3)) and, by the Bott periodicity theorem, π_6(SO(7)) are trivial; therefore all immersions of the spheres S^2 and S^6 in Euclidean spaces of one more dimension are regularly homotopic. In particular, spheres embedded in R^n admit eversion if n = 3 or 7. A corollary of his work is that there is only one regular homotopy class of a 2-sphere immersed in R^3. In particular, this means that the standard sphere and its inside-out reversal are regularly homotopic, i.e. that sphere eversions exist.
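As a sketch of the invariant involved, write an immersed circle as gamma(t) = (x(t), y(t)) with nowhere-vanishing derivative (this parametrization is an assumption of the illustration). The turning number in the Whitney–Graustein theorem is the degree of the Gauss map t -> gamma'(t)/|gamma'(t)|, and can be computed as

\tau(\gamma) = \frac{1}{2\pi} \oint_{S^1} \frac{x'(t)\,y''(t) - y'(t)\,x''(t)}{x'(t)^2 + y'(t)^2} \, dt,

so two immersions of the circle in the plane are regularly homotopic exactly when this integer agrees.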
https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes%20existence%20and%20smoothness
The Navier–Stokes existence and smoothness problem concerns the mathematical properties of solutions to the Navier–Stokes equations, a system of partial differential equations that describe the motion of a fluid in space. Solutions to the Navier–Stokes equations are used in many practical applications. However, theoretical understanding of the solutions to these equations is incomplete. In particular, solutions of the Navier–Stokes equations often include turbulence, which remains one of the greatest unsolved problems in physics, despite its immense importance in science and engineering. Even more basic (and seemingly intuitive) properties of the solutions to Navier–Stokes have never been proven. For the three-dimensional system of equations, and given some initial conditions, mathematicians have neither proved that smooth solutions always exist, nor found any counter-examples. This is called the Navier–Stokes existence and smoothness problem. Since understanding the Navier–Stokes equations is considered to be the first step to understanding the elusive phenomenon of turbulence, the Clay Mathematics Institute in May 2000 made this problem one of its seven Millennium Prize problems in mathematics. It offered a US$1,000,000 prize to the first person providing a solution for a specific statement of the problem. The Navier–Stokes equations In mathematics, the Navier–Stokes equations are a system of nonlinear partial differential equations for abstract vector fields of any size. In physics and engineering, they are a system of equations that model the motion of liquids or non-rarefied gases (in which the mean free path is short enough so that it can be thought of as a continuum instead of a collection of particles) using continuum mechanics. The equations are a statement of Newton's second law, with the forces modeled according to those in a viscous Newtonian fluid—as the sum of contributions by pressure, viscous stress and an external body force. Since th
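For reference, the incompressible form of the equations at issue (the setting of the Millennium Prize problem) reads, in standard notation,

\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\nabla p + \nu \, \Delta \mathbf{u} + \mathbf{f}(x, t), \qquad \nabla \cdot \mathbf{u} = 0,

where u(x, t) is the velocity field, p(x, t) the pressure, \nu > 0 the kinematic viscosity and f a given external force; the unknowns are u and p, and an initial condition u(x, 0) = u_0(x) is prescribed.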
https://en.wikipedia.org/wiki/PCB%20Piezotronics
PCB Piezotronics is a manufacturer of piezoelectric sensors. The name "PCB" is an abbreviation of "PicoCoulomB", technical terminology for an electrical charge of the type generated by the piezoelectric sensors the company manufactures. It is also a registered trademark of the company. "Piezotronics" combines the science of piezoelectricity and electronics. PCB® manufactures sensors and related instrumentation. Sensors are small electromechanical instruments for the measurement of acceleration, dynamic pressure, force, acoustics, torque, load, strain, shock, vibration and sound. History Founded by Robert W. Lally and James (Jim) F. Lally in 1967, PCB Piezotronics has evolved from a family business into a large engineering and manufacturing operation, with technical emphasis on the incorporation of integrated-circuit piezoelectric sensor technology. In 1967, integrated circuit piezoelectric sensors, also known as ICP sensors, which incorporate microelectronic circuitry, were developed and marketed. The 1970s saw PCB Piezotronics expand its standard product offerings to include other types of sensor technologies. In 1971, the company developed a 100,000 g high-shock ICP® quartz accelerometer; Impulse Hammers for structural excitation were developed in 1972; and in 1973, the first rugged, industrial-grade ICP® accelerometer was introduced to serve the emerging machinery health monitoring market. Employment grew to 25 employees. By 1975, PCB® had become one of the largest U.S. manufacturers of piezoelectric sensors. During the 1980s, PCB® continued to develop new products. In 1982, the Structural Modal Array Sensing System was developed to ease sensor installation and reduce set-up time on larger-scale modal surveys. Modally-Tuned Impulse Hammers won the IR-100 Award as one of the top 100 technical developments for 1983. The 128-channel Data Harvester was invented in 1984 to provide sensor power and speed modal analysis by offering automatic
https://en.wikipedia.org/wiki/Red%20Storm%20%28computing%29
Red Storm is a supercomputer architecture designed for the US Department of Energy’s National Nuclear Security Administration Advanced Simulation and Computing Program. Cray, Inc. developed it based on the contracted architectural specifications provided by Sandia National Laboratories. The architecture was later commercially produced as the Cray XT3. Red Storm is a partitioned, space shared, tightly coupled, massively parallel processing machine with a high performance 3D mesh network. The processors are commodity AMD Opteron CPUs with off-the-shelf memory DIMMs. The NIC/router combination, called SeaStar, is the only custom ASIC component in the system and uses a PowerPC 440 based core. When deployed in 2005, Red Storm’s initial configuration consisted of 10,880 single-core 2.0 GHz Opterons, of which 10,368 were dedicated for scientific calculations. The remaining 512 Opterons were used to service the computations and also provide the user interface to the system and run a version of Linux. This initial installation consisted of 140 cabinets, taking up of floor space. The Red Storm supercomputer was designed to be highly scalable from a single cabinet to hundreds of cabinets and has been scaled up twice. In 2006 the system was upgraded to 2.4 GHz Dual-Core Opterons. An additional fifth row of computer cabinets was also brought online, resulting in over 26,000 processor cores. This resulted in a peak performance of 124.4 teraflops, or 101.4 running the Linpack benchmark. A second major upgrade in 2008 introduced Cray XT4 technology: Quad-core Opteron processors and an increase in memory to 2 GB per core. This resulted in a peak theoretical performance of 284 teraflops. Top500 performance ranking for Red Storm after each upgrade: November 2005: Rank 6 (36.19 TFLOPS) November 2006: Rank 2 (101.4 TFLOPS) November 2008: Rank 9 (204.2 TFLOPS) Red Storm is intended for capability computing. That is, a single application can be run on the entire system. This is in
https://en.wikipedia.org/wiki/Bending%20moment
In solid mechanics, a bending moment is the reaction induced in a structural element when an external force or moment is applied to the element, causing the element to bend. The most common or simplest structural element subjected to bending moments is the beam. A beam may be simply supported (free to rotate and therefore lacking bending moments) at both ends; in that case the ends can only react to the shear loads. Other beams can have both ends fixed (known as an encastre beam); therefore each end support has both bending moments and shear reaction loads. Beams can also have one end fixed and one end simply supported. The simplest type of beam is the cantilever, which is fixed at one end and is free at the other end (neither simple nor fixed). In reality, beam supports are usually neither absolutely fixed nor absolutely rotating freely. The internal reaction loads in a cross-section of the structural element can be resolved into a resultant force and a resultant couple. For equilibrium, the moment created by external forces/moments must be balanced by the couple induced by the internal loads. The resultant internal couple is called the bending moment while the resultant internal force is called the shear force (if it is transverse to the plane of the element) or the normal force (if it is along the plane of the element). Normal force is also termed axial force. The bending moment at a section through a structural element may be defined as the sum of the moments about that section of all external forces acting to one side of that section. The forces and moments on either side of the section must be equal in order to counteract each other and maintain a state of equilibrium so the same bending moment will result from summing the moments, regardless of which side of the section is selected. If clockwise bending moments are taken as negative, then a negative bending moment within an element will cause "hogging", and a positive moment will cause "sagging". It is
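As a worked example of this definition, consider a simply supported beam of span L carrying a central point load P. Each support reaction is P/2, so summing moments of the forces to the left of a section at distance x gives M(x) = (P/2)x up to midspan and M(x) = (P/2)x − P(x − L/2) beyond it, peaking at PL/4 at midspan. The C sketch below tabulates this, taking sagging as positive per the convention above; the load and span values are invented.

#include <stdio.h>

/* Bending moment at x for a simply supported beam of span L
   carrying a central point load P (sagging positive). */
static double moment(double P, double L, double x)
{
    double R = P / 2.0;                  /* left support reaction */
    return (x <= L / 2.0)
        ? R * x                          /* only the reaction acts left of x */
        : R * x - P * (x - L / 2.0);     /* the load now acts left of x too */
}

int main(void)
{
    double P = 10e3, L = 4.0;            /* 10 kN load on a 4 m span */
    for (double x = 0.0; x <= L; x += 0.5)
        printf("x = %.1f m   M = %8.1f N*m\n", x, moment(P, L, x));
    /* Peak moment P*L/4 = 10 kN*m occurs at midspan. */
    return 0;
}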
https://en.wikipedia.org/wiki/ISO%2022000
ISO 22000 is a food safety management system standard from the International Organization for Standardization (ISO). It is outcome focused, providing requirements for any organization in the food industry, with the objective of helping to improve overall performance in food safety. These standards are intended to ensure safety in the global food supply chain. The standards set out overall guidelines for food safety management and also focus on traceability in the feed and food chain. Food safety Food safety refers to all those hazards, whether chronic or acute, that may make food injurious to the health of the consumer. ISO 22000 standard ISO 22000 is the most popular voluntary food safety international standard in the food industry, with 51,535 certified sites in total (as per the ISO Survey 2022). The ISO 22000 family are international voluntary consensus standards which align with Good Standardization Practices (GSP) and the World Trade Organization (WTO) Principles for the Development of International Standards. The standard defines the requirements for a Food Safety Management System (FSMS), incorporating the following elements, which are defined as FSMS principles: interactive communication; system management; prerequisite programs; HACCP principles. Critical reviews of the above elements have been conducted by many scientists. Communication along the food chain is essential to ensure that all relevant food safety hazards are identified and adequately controlled at each step within the food chain. This implies communication between organizations both upstream and downstream in the food chain. Communication with customers and suppliers about identified hazards and control measures will assist in clarifying customer and supplier requirements. Recognition of the organization's role and position within the food chain is essential to ensure effective interactive communication throughout the chain in order to deliver safe food products to the consumer. ISO 22000 and HACCP ISO 22000 has
https://en.wikipedia.org/wiki/Layer%20%28object-oriented%20design%29
In software object-oriented design, a layer is a group of classes that have the same set of link-time module dependencies to other modules. In other words, a layer is a group of components that can be reused in similar circumstances. In programming languages, the layer distinction is often expressed as "import" dependencies between software modules. Layers are often arranged in a tree-form hierarchy, with dependency relationships as links between the layers. Dependency relationships between layers are often either inheritance, composition or aggregation relationships, but other kinds of dependencies can also be used. Layers is an architectural pattern described in many books, for example Pattern-Oriented Software Architecture. See also Abstraction layer Multitier architecture Shearing layers References Object-oriented programming Software design
https://en.wikipedia.org/wiki/FLP-FRT%20recombination
In genetics, Flp-FRT recombination is a site-directed recombination technology, increasingly used to manipulate an organism's DNA under controlled conditions in vivo. It is analogous to Cre-lox recombination but involves the recombination of sequences between short flippase recognition target (FRT) sites by the recombinase flippase (Flp) derived from the 2 µ plasmid of baker's yeast Saccharomyces cerevisiae. The minimal FRT site is 34 bp long: flippase (Flp) binds to two 13-bp arms, 5'-GAAGTTCCTATTC-3', in inverted orientation flanking an asymmetric 8-bp spacer, the region of crossover in site-specific recombination. FRT-mediated cleavage occurs just ahead of the 8-bp core region on the top strand and behind this sequence on the bottom strand. Several variant FRT sites exist, but recombination can usually occur only between two identical FRTs and generally not among non-identical ("heterospecific") FRTs. Biological function In yeast, this enzyme corrects decreases in 2 µ plasmid copy number caused by rare missegregation events. It does so by causing recombination between the two inverted repetitions on the 2 µ plasmid during DNA replication. This changes the direction of one replication fork, causing multiple rounds of copying in a single initiation. Mutations of the FRT site sequence Senecoff et al. (1987) investigated how nucleotide substitutions within the FRT affected the efficacy of FLP-mediated recombination. The authors induced base substitutions in either one or both of the FRT sites and tested the concentration of FLP required to observe site-specific recombination. Every base substitution was performed on each of the thirteen nucleotides within the FRT site (for example, G to A, T, and C). First, the authors showed that most mutations within the FRT sequence cause minimal effects if present within only one of the two sites. If mutations occurred within both sites, the efficiency of FLP is dramatically reduced.
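To illustrate the inverted-repeat layout just described, the sketch below assembles an idealized FRT-like site from the 13-bp arm quoted in the text, an 8-bp placeholder spacer (written as 'N' bases, since the spacer sequence is not reproduced here) and the arm's reverse complement, which is how an arm in inverted orientation appears on the same strand. The construction is illustrative, not a cloning-ready sequence.

#include <stdio.h>
#include <string.h>

/* Watson-Crick complement of a DNA base ('N' for placeholders). */
static char complement(char b)
{
    switch (b) {
    case 'A': return 'T';
    case 'T': return 'A';
    case 'G': return 'C';
    case 'C': return 'G';
    default:  return 'N';
    }
}

int main(void)
{
    const char *arm = "GAAGTTCCTATTC";   /* 13-bp Flp-binding arm */
    const char *spacer = "NNNNNNNN";     /* 8-bp asymmetric spacer (placeholder) */
    size_t n = strlen(arm);
    char rc[14], site[64];

    /* Reverse complement of the arm = the arm in inverted orientation. */
    for (size_t i = 0; i < n; i++)
        rc[i] = complement(arm[n - 1 - i]);
    rc[n] = '\0';

    snprintf(site, sizeof site, "%s%s%s", arm, spacer, rc);
    printf("idealized site (%zu bp): 5'-%s-3'\n", strlen(site), site);
    return 0;
}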
https://en.wikipedia.org/wiki/Operational%20transconductance%20amplifier
The operational transconductance amplifier (OTA) is an amplifier whose differential input voltage produces an output current. Thus, it is a voltage controlled current source (VCCS). There is usually an additional input for a current to control the amplifier's transconductance. The OTA is similar to a standard operational amplifier in that it has a high impedance differential input stage and that it may be used with negative feedback. The first commercially available integrated circuit units were produced by RCA (later acquired by General Electric) in 1969, in the form of the CA3080. Although most units are constructed with bipolar transistors, field effect transistor units are also produced. The OTA is not as useful by itself in the vast majority of standard op-amp functions as the ordinary op-amp because its output is a current. One of its principal uses is in implementing electronically controlled applications such as variable frequency oscillators and filters and variable gain amplifier stages, which are more difficult to implement with standard op-amps. Principal differences from standard operational amplifiers Its output is a current, in contrast to that of a standard operational amplifier, whose output is a voltage. It is usually used "open-loop", without negative feedback, in linear applications. This is possible because the magnitude of the resistance attached to its output controls its output voltage. Therefore, a resistance can be chosen that keeps the output from going into saturation, even with high differential input voltages. Basic operation In the ideal OTA, the output current is a linear function of the differential input voltage, calculated as follows: Iout = gm (Vin+ − Vin−), where Vin+ is the voltage at the non-inverting input, Vin− is the voltage at the inverting input and gm is the transconductance of the amplifier. The amplifier's output voltage is the product of its output current and its load resistance: Vout = Iout Rload. The voltage gain is then the output voltage divided by the differential input voltage: Gv = Vout / (Vin+ − Vin−) = gm Rload.
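The relations above reduce to a few lines of arithmetic; the sketch below evaluates them for an invented transconductance, load and input (component values are illustrative only).

#include <stdio.h>

int main(void)
{
    double gm    = 9.6e-3;               /* transconductance in siemens */
    double rload = 10e3;                 /* load resistance in ohms */
    double vplus = 0.010, vminus = 0.0;  /* 10 mV differential input */

    double iout = gm * (vplus - vminus); /* Iout = gm (Vin+ - Vin-) */
    double vout = iout * rload;          /* Vout = Iout * Rload */

    printf("Iout = %.3e A\n", iout);
    printf("Vout = %.3f V, gain = %.1f\n", vout, gm * rload);
    return 0;
}

Choosing rload here is exactly the open-loop design freedom described above: a smaller load resistance lowers the gain and keeps the output out of saturation for a given input range.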
https://en.wikipedia.org/wiki/Contact%20inhibition
In cell biology, contact inhibition refers to two different but closely related phenomena: contact inhibition of locomotion (CIL) and contact inhibition of proliferation (CIP). CIL refers to the avoidance behavior exhibited by fibroblast-like cells when in contact with one another. In most cases, when two cells contact each other, they attempt to alter their locomotion in a different direction to avoid future collision. When collision is unavoidable, a different phenomenon occurs whereby growth of the cells of the culture itself eventually stops in a cell-density dependent manner. Both types of contact inhibition are well-known properties of normal cells and contribute to the regulation of proper tissue growth, differentiation, and development. It is worth noting that both types of regulation are normally negated and overcome during organogenesis during embryonic development and tissue and wound healing. However, contact inhibition of locomotion and proliferation are both aberrantly absent in cancer cells, and the absence of this regulation contributes to tumorigenesis. Mechanism Contact inhibition is a regulatory mechanism that functions to keep cells growing into a layer one cell thick (a monolayer). If a cell has plenty of available substrate space, it replicates rapidly and moves freely. This process continues until the cells occupy the entire substratum. At this point, normal cells will stop replicating. As motile cells come into contact in confluent cultures, they exhibit decreased mobility and mitotic activity over time. Exponential growth has been shown to occur between colonies in contact for numerous days, with the inhibition of mitotic activity occurring far later. This delay between cell-cell contact and onset of proliferation inhibition is shortened as the culture becomes more confluent. Thus, it may be reasonably concluded that cell-cell contact is an essential condition for contact inhibition of proliferation, but is by itself insufficient for mit
https://en.wikipedia.org/wiki/August%20Adler
August Adler (24 January 1863, Opava, Austrian Silesia – 17 October 1923, Vienna) was a Czech and Austrian mathematician noted for using the theory of inversion to provide an alternate proof of Mascheroni's compass and straightedge construction theorem. External links 1863 births 1923 deaths People from Opava People from Austrian Silesia Geometers 19th-century Austrian mathematicians Czechoslovak mathematicians Mathematicians from Austria-Hungary
https://en.wikipedia.org/wiki/Shotgun%20lipidomics
In lipidomics, the process of shotgun lipidomics (named by analogy with shotgun sequencing) uses analytical chemistry to investigate the biological function, significance, and sequelae of alterations in lipids and protein constituents mediating lipid metabolism, trafficking, or biological function in cells. Lipidomics has been greatly facilitated by recent advances in, and novel applications of, electrospray ionization mass spectrometry (ESI/MS). Lipidomics is a research field that studies the pathways and networks of cellular lipids in biological systems (i.e., lipidomes) on a large scale. It involves the identification and quantification of the thousands of cellular lipid molecular species and their interactions with other lipids, proteins, and other moieties in vivo. Investigators in lipidomics examine the structures, functions, interactions, and dynamics of cellular lipids and the dynamic changes that occur during pathophysiologic perturbations. Lipidomic studies play an essential role in defining the biochemical mechanisms of lipid-related disease processes through identifying alterations in cellular lipid metabolism, trafficking and homeostasis. The two major platforms currently used for lipidomic analyses are HPLC-MS and shotgun lipidomics. History Shotgun lipidomics was developed by Richard W. Gross and Xianlin Han, by employing ESI intrasource separation techniques. Individual molecular species of most major and many minor lipid classes can be fingerprinted and quantitated directly from biological lipid extracts without the need for chromatographic purification. Advantages Shotgun lipidomics is fast, highly sensitive, and it can identify hundreds of lipids missed by other methods — all with a much smaller tissue sample so that specific cells or minute biopsy samples can be examined. References Further reading Gunning for fats Biochemistry methods
https://en.wikipedia.org/wiki/Deterministic%20acyclic%20finite%20state%20automaton
In computer science, a deterministic acyclic finite state automaton (DAFSA), also called a directed acyclic word graph (DAWG; though that name also refers to a related data structure that functions as a suffix index) is a data structure that represents a set of strings, and allows for a query operation that tests whether a given string belongs to the set in time proportional to its length. Algorithms exist to construct and maintain such automata, while keeping them minimal. A DAFSA is a special case of a finite state recognizer that takes the form of a directed acyclic graph with a single source vertex (a vertex with no incoming edges), in which each edge of the graph is labeled by a letter or symbol, and in which each vertex has at most one outgoing edge for each possible letter or symbol. The strings represented by the DAFSA are formed by the symbols on paths in the graph from the source vertex to any sink vertex (a vertex with no outgoing edges). In fact, a deterministic finite state automaton is acyclic if and only if it recognizes a finite set of strings. Comparison to tries By allowing the same vertices to be reached by multiple paths, a DAFSA may use significantly fewer vertices than the strongly related trie data structure. Consider, for example, the four English words "tap", "taps", "top", and "tops". A trie for those four words would have 12 vertices, one for each of the strings formed as a prefix of one of these words, or for one of the words followed by the end-of-string marker. However, a DAFSA can represent these same four words using only six vertices vi for 0 ≤ i ≤ 5, and the following edges: an edge from v0 to v1 labeled "t", two edges from v1 to v2 labeled "a" and "o", an edge from v2 to v3 labeled "p", an edge from v3 to v4 labeled "s", and edges from v3 and v4 to v5 labeled with the end-of-string marker. There is a tradeoff between memory and functionality, because a standard DAFSA can tell you if a word exists within it, but it cannot point you to
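The six-vertex automaton above is small enough to write down as a transition table; the following sketch encodes it and answers membership queries, using the C string terminator '\0' to play the role of the end-of-string marker (an illustrative encoding, not a standard one).

#include <stdio.h>
#include <string.h>

/* Transition table of the DAFSA accepting "tap", "taps", "top", "tops".
   States are v0..v5; v5 is the sink reached on the end-of-string marker. */
struct edge { int from; char label; int to; };

static const struct edge edges[] = {
    {0, 't', 1}, {1, 'a', 2}, {1, 'o', 2}, {2, 'p', 3},
    {3, 's', 4}, {3, '\0', 5}, {4, '\0', 5},
};

static int accepts(const char *w)
{
    int state = 0;
    size_t n = strlen(w) + 1;            /* include '\0' as the marker */
    for (size_t i = 0; i < n; i++) {
        int next = -1;
        for (size_t e = 0; e < sizeof edges / sizeof edges[0]; e++)
            if (edges[e].from == state && edges[e].label == w[i])
                next = edges[e].to;
        if (next < 0) return 0;          /* no outgoing edge: reject */
        state = next;
    }
    return state == 5;                   /* accept iff the sink was reached */
}

int main(void)
{
    const char *tests[] = { "tap", "taps", "tops", "ta", "stop" };
    for (int i = 0; i < 5; i++)
        printf("%-5s -> %s\n", tests[i], accepts(tests[i]) ? "yes" : "no");
    return 0;
}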
https://en.wikipedia.org/wiki/Pieter%20Hendrik%20Schoute
Pieter Hendrik Schoute (21 January 1846, Wormerveer – 18 April 1913, Groningen) was a Dutch mathematician known for his work on regular polytopes and Euclidean geometry. He started his career as a civil engineer, but became a professor of mathematics at Groningen and published some thirty papers on polytopes between 1878 and his death in 1913. He collaborated with Alicia Boole Stott on describing the sections of the regular 4-polytopes. In 1886, he became a member of the Royal Netherlands Academy of Arts and Sciences. Citations References Pieter Hendrik Schoute, Analytical treatment of the polytopes regularly derived from the regular polytopes, Amsterdam: J. Muller, 1911. Written in English, 82 pages. External links 19th-century Dutch mathematicians 20th-century Dutch mathematicians Geometers 1846 births 1913 deaths Members of the Royal Netherlands Academy of Arts and Sciences People from Zaanstad Delft University of Technology alumni
https://en.wikipedia.org/wiki/List%20of%20phylogenetics%20software
This list of phylogenetics software is a compilation of computational phylogenetics software used to produce phylogenetic trees. Such tools are commonly used in comparative genomics, cladistics, and bioinformatics. Methods for estimating phylogenies include neighbor-joining, maximum parsimony (also simply referred to as parsimony), UPGMA, Bayesian phylogenetic inference, maximum likelihood and distance matrix methods. List See also List of phylogenetic tree visualization software References External links Complete list of Institut Pasteur phylogeny webservers ExPASy List of phylogenetics programs A very comprehensive list of phylogenetic tools (reconstruction, visualization, etc.) Another list of evolutionary genetics software A list of phylogenetic software provided by the Zoological Research Museum A. Koenig MicrobeTrace available at https://github.com/CDCgov/MicrobeTrace/wiki Genetics databases Phylo Phylogenetics
https://en.wikipedia.org/wiki/DAML-S
The DARPA agent markup language for services (DAML-S) is a semantic markup language for describing web services and related ontologies. DAML-S is built on top of DAML+OIL. DAML-S has been superseded by OWL-S References Markup languages Web services
https://en.wikipedia.org/wiki/Wilmagate
WilmaGate is a collection of open-source tools for Authentication, Authorization and Accounting on an Open Access Network. It was initially developed by the Computer Networks and Mobility Group at the University of Trento (Italy). Its development was part of the locally funded Wilma Project and is now pursued by the Twelve Project under the name Uni-Fy. It is currently used for wireless authentication at the Faculty of Science at the University of Trento and by the UniWireless network of Italian research groups participating in the Twelve Project. Features The system is designed to separate the user authentication phase (which is usually performed by a possibly remote ISP) from internet access provided at the user's current location by a local carrier. Therefore, a multiplicity of authentication providers and of access providers is envisioned. The WilmaGate system provides code for both purposes and for a variety of authentication methods. Its modular and object-oriented structure allows programmers to easily add plug-in code for new authentication or accounting protocols. Steps The following steps are performed in a normal user connection. The user's mobile terminal (laptop or PDA) physically connects to a network, either by plugging in a cable (Ethernet or FireWire) or by associating with a wireless access point via Wi-Fi or Bluetooth. The terminal automatically issues a DHCP handshake in order to set up an appropriate configuration for the network it is entering. By this action, the mobile terminal's existence is recognized by the Gateway component. The client starts some form of authentication process, either by opening a web browser and having it redirected to an authentication provider of the admin's choice, or through some pre-installed authentication program. After authentication, the client may have full Internet access; however, some authentication-based restrictions are ap