https://en.wikipedia.org/wiki/Frequency%20drift
In electrical engineering, and particularly in telecommunications, frequency drift is an unintended and generally arbitrary offset of an oscillator from its nominal frequency. Causes may include component aging, changes in temperature that alter the piezoelectric effect in a crystal oscillator, or problems with a voltage regulator which controls the bias voltage to the oscillator. Frequency drift is traditionally measured in Hz/s. Frequency stability can be regarded as the absence (or a very low level) of frequency drift. On a radio transmitter, frequency drift can cause a radio station to drift into an adjacent channel, causing illegal interference. Because of this, frequency allocation regulations specify the allowed tolerance for such oscillators in a type-accepted device. A temperature-compensated, voltage-controlled crystal oscillator (TCVCXO) is normally used for frequency modulation. On the receiver side, frequency drift was mainly a problem in early tuners, particularly for analog dial tuning, and especially on FM, which exhibits a capture effect. However, the use of a phase-locked loop (PLL) essentially eliminates the drift issue. For transmitters, a numerically controlled oscillator (NCO) is likewise free of drift problems. Drift differs from Doppler shift, which is a perceived difference in frequency due to motion of the source or receiver, even though the source is still producing the same wavelength. It also differs from frequency deviation, which is the inherent and necessary result of modulation in both FM and phase modulation. See also Allan variance Clock drift Phase noise Automatic frequency control (AFC) Phase-locked loop (PLL) References Communication circuits Broadcast engineering
https://en.wikipedia.org/wiki/Tapestry%20%28DHT%29
Tapestry is a peer-to-peer overlay network which provides a distributed hash table, routing, and multicasting infrastructure for distributed applications. The Tapestry peer-to-peer system offers efficient, scalable, self-repairing, location-aware routing to nearby resources. Introduction The first generation of peer-to-peer applications, including Napster and Gnutella, had restrictive limitations, such as a central directory for Napster and scoped broadcast queries for Gnutella, which limited scalability. To address these problems, a second generation of P2P applications was developed, including Tapestry, Chord, Pastry, and CAN. These overlays implement a basic key-based routing mechanism. This allows for deterministic routing of messages and adaptation to node failures in the overlay network. Of the named networks, Pastry is closest to Tapestry, as both adopt the same routing algorithm of Plaxton et al. Tapestry is an extensible infrastructure that provides decentralized object location and routing focusing on efficiency and minimizing message latency. This is achieved because Tapestry constructs locally optimal routing tables from initialization and maintains them in order to reduce routing stretch. Furthermore, Tapestry allows applications to determine how objects are distributed according to their needs. Similarly, Tapestry allows applications to implement multicasting in the overlay network. Algorithm API Each node is assigned a unique nodeID uniformly distributed in a large identifier space. Tapestry uses SHA-1 to produce a 160-bit identifier space represented by a 40-digit hex key. Application-specific endpoints (GUIDs) are similarly assigned unique identifiers. NodeIDs and GUIDs are roughly evenly distributed in the overlay network, with each node storing several different IDs. Experiments show that Tapestry's efficiency increases with network size, so multiple applications sharing the same overlay network increase efficiency. To differentiate between
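A toy sketch of the prefix-routing idea behind this key-based routing (an illustrative Python simulation, not Tapestry's actual implementation, which also maintains backup links and surrogate routing for holes in the tables):

```python
# Toy simulation of Tapestry-style prefix routing (illustrative only).
# Node IDs are 40-digit hex strings from SHA-1; each hop is chosen from a
# per-node routing table and extends the prefix shared with the destination.
import hashlib

HEX_DIGITS = "0123456789abcdef"

def node_id(name: str) -> str:
    return hashlib.sha1(name.encode()).hexdigest()  # 160 bits = 40 hex digits

def shared_prefix_len(a: str, b: str) -> int:
    n = 0
    while n < len(a) and a[n] == b[n]:
        n += 1
    return n

def build_tables(nodes):
    """Routing table: (level, digit) -> some node matching that longer prefix."""
    tables = {}
    for n in nodes:
        table = {}
        for level in range(40):
            for digit in HEX_DIGITS:
                want = n[:level] + digit
                match = next((c for c in nodes if c.startswith(want)), None)
                if match:
                    table[(level, digit)] = match
        tables[n] = table
    return tables

def route(src, dst, tables):
    path, cur = [src], src
    while cur != dst:
        level = shared_prefix_len(cur, dst)
        nxt = tables[cur].get((level, dst[level]))
        if nxt is None or nxt == cur:
            break  # real Tapestry falls back to surrogate routing here
        path.append(nxt)
        cur = nxt
    return path

nodes = [node_id(f"node-{i}") for i in range(64)]
print(route(nodes[0], nodes[42], tables=build_tables(nodes)))
```

Each hop strictly lengthens the matched prefix, so for N nodes the path length is logarithmic in N with high probability, which is the source of Tapestry's scalability.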
https://en.wikipedia.org/wiki/Winlink
Winlink, formally Winlink Global Radio Email (a registered US Service Mark), also known as the Winlink 2000 Network, is a worldwide radio messaging system that uses amateur-band radio frequencies and government frequencies to provide radio interconnection services that include email with attachments, position reporting, weather bulletins, emergency and relief communications, and message relay. The system is built and administered by volunteers and is financially supported by the Amateur Radio Safety Foundation. Network Winlink networking started by providing interconnection services for amateur radio (also known as ham radio). It is well known for its central role in emergency and contingency communications worldwide. The system used to employ multiple central message servers around the world for redundancy, but in 2017–2018 upgraded to Amazon Web Services, which provides a geographically redundant cluster of virtual servers with dynamic load balancers and global content distribution. Gateway stations have operated on sub-bands of HF since 2013 as the Winlink Hybrid Network, offering message forwarding and delivery through a mesh-like smart network whenever Internet connections are damaged or inoperable. During the late 1990s and 2000s, it increasingly became what is now the standard network system for amateur radio email worldwide. Additionally, in response to the need for better disaster response communications in the mid to later part of the 2000s, the network was expanded to provide separate parallel radio email networking systems for MARS, UK Cadet, Austrian Red Cross, the US Department of Homeland Security SHARES HF Program, and other groups. Amateur radio HF e-mail E-mail communications over amateur radio in the 21st century are now considered normal and commonplace. E-mail via high frequency (HF) can be used nearly everywhere on the planet, and is made possible by connecting an HF single sideband (SSB) transceiver system to a computer, mod
https://en.wikipedia.org/wiki/Multiple%20rule-based%20problems
Multiple rule-based problems are problems that involve several conflicting rules and restrictions. Such problems typically have an "optimal" solution, found by striking a balance among the various restrictions without directly violating any of them. Solutions to such problems can either require complex, non-linear thinking processes, or can instead be found mathematically, by expressing the restrictions as equations or inequalities and finding the value that is maximal subject to all of them. These problems may thus require more working information than causal-relationship problem solving or single rule-based problem solving, and multiple rule-based problem solving is more likely to increase cognitive load than the other two types of problem solving. References Mathematical analysis
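A minimal sketch of the mathematics-based approach (an invented toy example, not from the article): two conflicting restrictions are written as linear inequalities, and an optimum is found without violating either, here with SciPy's linear-programming routine.

```python
# Hypothetical example: maximize 3x + 2y subject to two conflicting
# restrictions (limited labour and limited material) without violating either.
# scipy.optimize.linprog minimizes, so the objective is negated.
from scipy.optimize import linprog

result = linprog(
    c=[-3, -2],                 # maximize 3x + 2y  ->  minimize -3x - 2y
    A_ub=[[1, 2],               # restriction 1 (labour):   x + 2y <= 14
          [3, 1]],              # restriction 2 (material): 3x + y <= 18
    b_ub=[14, 18],
    bounds=[(0, None), (0, None)],
)
print(result.x, -result.fun)    # optimal (x, y) = (4.4, 4.8), value 22.8
```

The optimum sits exactly where the two restrictions balance each other, which is the "striking a balance" behaviour the definition describes.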
https://en.wikipedia.org/wiki/Ornstein%20isomorphism%20theorem
In mathematics, the Ornstein isomorphism theorem is a deep result in ergodic theory. It states that if two Bernoulli schemes have the same Kolmogorov entropy, then they are isomorphic. The result, given by Donald Ornstein in 1970, is important because it states that many systems previously believed to be unrelated are in fact isomorphic; these include all finite stationary stochastic processes, including Markov chains and subshifts of finite type, Anosov flows and Sinai's billiards, ergodic automorphisms of the n-torus, and the continued fraction transform. Discussion The theorem is actually a collection of related theorems. The first theorem states that if two different Bernoulli shifts have the same Kolmogorov entropy, then they are isomorphic as dynamical systems. The third theorem extends this result to flows: namely, that there exists a flow $(T_t)$ such that $T_1$ is a Bernoulli shift. The fourth theorem states that, for a given fixed entropy, this flow is unique, up to a constant rescaling of time. The fifth theorem states that there is a single, unique flow (up to a constant rescaling of time) that has infinite entropy. The phrase "up to a constant rescaling of time" means simply that if $(T_t)$ and $(S_t)$ are two Bernoulli flows with the same entropy, then $S_t = T_{ct}$ for some constant c. The developments also included proofs that factors of Bernoulli shifts are isomorphic to Bernoulli shifts, and gave criteria for a given measure-preserving dynamical system to be isomorphic to a Bernoulli shift. A corollary of these results is a solution to the root problem for Bernoulli shifts: so, for example, given a shift T, there is another shift $\sqrt{T}$ whose square is isomorphic to T. History The question of isomorphism dates to von Neumann, who asked if the two Bernoulli schemes BS(1/2, 1/2) and BS(1/3, 1/3, 1/3) were isomorphic or not. In 1959, Ya. Sinai and Kolmogorov replied in the negative, showing that two different schemes cannot be isomorphic if they do not have the same entropy. Specifically, th
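For reference, the entropy that separates von Neumann's two schemes comes from the standard formula for a Bernoulli scheme $BS(p_1,\ldots,p_k)$:

```latex
% Kolmogorov entropy of a Bernoulli scheme, applied to von Neumann's question:
H\bigl(BS(p_1,\ldots,p_k)\bigr) = -\sum_{i=1}^{k} p_i \log p_i,
\qquad
H\bigl(BS(\tfrac12,\tfrac12)\bigr) = \log 2
\;\neq\;
\log 3 = H\bigl(BS(\tfrac13,\tfrac13,\tfrac13)\bigr).
```

Ornstein's theorem supplies the converse direction: equal entropy is not only necessary but also sufficient for isomorphism.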
https://en.wikipedia.org/wiki/Rakuten%20Advertising
Rakuten Advertising, formerly known as Rakuten Marketing, is an affiliate marketing service provider. In 2005, the company claimed it was the largest pay-for-performance affiliate marketing network on the Internet. That year, Rakuten acquired LinkShare for US$425 million in cash, making LinkShare a wholly owned U.S. division of Rakuten, Inc., a Japanese shopping portal. Rakuten LinkShare was rebranded as Rakuten Affiliate Network in 2014, and in 2020 Rakuten Marketing was renamed Rakuten Advertising. References External links Affiliate marketing Online advertising services and affiliate networks Companies based in New York City Marketing companies established in 1996 Rakuten 2005 mergers and acquisitions
https://en.wikipedia.org/wiki/24%20%28puzzle%29
The 24 puzzle is an arithmetical puzzle in which the objective is to find a way to manipulate four integers so that the end result is 24. For example, for the numbers 4, 7, 8, 8, a possible solution is (7 − (8 ÷ 8)) × 4 = 24. The problem has been played as a card game in Shanghai since the 1960s, using playing cards. It has been known by other names, including Maths24. A proprietary version of the game has been created which extends the concept of the basic game to more complex mathematical operations. Original version The original version of 24 is played with an ordinary deck of playing cards with all the face cards removed. The aces are taken to have the value 1 and the basic game proceeds by having 4 cards dealt and the first player that can achieve the number 24 exactly using only allowed operations (addition, subtraction, multiplication, division, and parentheses) wins the hand. Some advanced players allow exponentiation, roots, logarithms, and other operations. For short games of 24, once a hand is won, the cards go to the player that won. If everyone gives up, the cards are shuffled back into the deck. The game ends when the deck is exhausted, and the player with the most cards wins. Longer games of 24 proceed by first dealing the cards out to the players, each of whom contributes to each set of cards exposed. A player who solves a set takes its cards and replenishes their pile, after the fashion of War. Players are eliminated when they no longer have any cards. A slightly different version includes the face cards, Jack, Queen, and King, giving them the values 11, 12, and 13, respectively. In the version of the game played with a standard 52-card deck and values 1 through 13, there are 1820 four-card combinations. Expansion to more complex operations Additional operations, such as square root and factorial, allow more possible solutions to the game. For instance, a set of 1,1,1,1 would be impossible to solve with only the five basic operations. However, with the use of factorials, it is
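A brute-force solver for the basic game is short enough to sketch (a generic illustration in Python, using exact rational arithmetic so that intermediate divisions are handled correctly):

```python
# Brute-force solver for the 24 puzzle (a generic sketch, not an official
# implementation): try every way to combine the numbers with +, -, *, /.
from fractions import Fraction

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if b != 0 else None,
}

def solve24(nums, target=24):
    """Return one expression string reaching target, or None."""
    def search(items):
        if len(items) == 1:
            value, expr = items[0]
            return expr if value == target else None
        # Pick an ordered pair (covers non-commutative - and /), combine,
        # and recurse on the shorter list.
        for i in range(len(items)):
            for j in range(len(items)):
                if i == j:
                    continue
                (a, ea), (b, eb) = items[i], items[j]
                rest = [items[k] for k in range(len(items)) if k not in (i, j)]
                for sym, fn in OPS.items():
                    v = fn(a, b)
                    if v is None:
                        continue
                    found = search(rest + [(v, f"({ea} {sym} {eb})")])
                    if found:
                        return found
        return None
    return search([(Fraction(n), str(n)) for n in nums])

print(solve24([4, 7, 8, 8]))   # finds a solution such as ((7 - (8 / 8)) * 4)
```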
https://en.wikipedia.org/wiki/Terbium%28III%29%20oxide
Terbium(III) oxide, also known as terbium sesquioxide, is a sesquioxide of the rare earth metal terbium, with the chemical formula Tb2O3. It is a p-type semiconductor that conducts protons; this conductivity is enhanced when the material is doped with calcium. It may be prepared by the reduction of Tb4O7 in hydrogen at 1300 °C for 24 hours. It is a basic oxide that dissolves easily in dilute acids, forming almost colourless terbium(III) salts: Tb2O3 + 6 H+ → 2 Tb3+ + 3 H2O The crystal structure is cubic, with lattice constant a = 1057 pm. References Terbium compounds Sesquioxides Semiconductor materials
https://en.wikipedia.org/wiki/Plena%20Ilustrita%20Vortaro%20de%20Esperanto
Plena Ilustrita Vortaro de Esperanto (PIV; Complete Illustrated Dictionary of Esperanto) is a monolingual dictionary of the language Esperanto. It was first compiled in 1970 by a large team of Esperanto linguists and specialists under the guidance of Gaston Waringhien and is published by the Sennacieca Asocio Tutmonda (SAT). It may be consulted online for free. The term "illustrated" refers to two features: (1) the use of clipart-like symbols rather than abbreviations for certain purposes (e.g., entries pertaining to agriculture are marked with a small image of a sickle rather than a note like "Agri." for "Agrikulturo"); and (2) the occasional use of a line-art sketch illustrating the item being defined. These sketches are not used for most entries. The entries that do have a sketch are most commonly plants and animals, and sometimes tools. History Original publication First published in 1970, the PIV has undergone two revisions to date and is considered by many to be something of a standard for Esperanto, thanks mainly to its unchallenged scope of 15,200 words and 39,400 lexical units. However, it is also criticized as excessively influenced by the French language and politically biased. Moreover, its few and often outmoded illustrations appeared only as an appendix. Supplement of 1987 In 1987, a supplement was separately published, produced under the guidance of Gaston Waringhien and Roland Levreaud. It covered approximately 1000 words and 1300 lexical units. 2002 and 2005 editions In 2002, after many years of work, a new revised edition appeared with the title La Nova Plena Ilustrita Vortaro de Esperanto (The New PIV), also dubbed PIV2 or PIV2002. Its chief editor was Michel Duc-Goninaz. PIV2002 (much like PIV2005) includes 16,780 words and 46,890 lexical units. Its illustrations are no longer located on the last pages, but rather are incorporated into the text itself. The edition was first presented to the SAT congress in Alicante, Spain in July 2002. The stock
https://en.wikipedia.org/wiki/Poussin%20proof
In number theory, the Poussin proof is the proof of an identity related to the fractional part of a ratio. In 1838, Peter Gustav Lejeune Dirichlet proved an approximate formula for the average number of divisors of all the numbers from 1 to n: $$\frac{1}{n}\sum_{k=1}^{n} d(k) \approx \ln n + 2\gamma - 1,$$ where d represents the divisor function, and γ represents the Euler–Mascheroni constant. In 1898, Charles Jean de la Vallée-Poussin proved that if a large number n is divided by all the primes up to n, then the average fraction by which the quotient falls short of the next whole number is γ: $$\lim_{n\to\infty}\frac{1}{\pi(n)}\sum_{p\le n}\left(1 - \left\{\frac{n}{p}\right\}\right) = \gamma,$$ where {x} represents the fractional part of x, and π represents the prime-counting function. For example, if we divide 29 by 2, we get 14.5, which falls short of 15 by 0.5. References Dirichlet, G. L. "Sur l'usage des séries infinies dans la théorie des nombres", Journal für die reine und angewandte Mathematik 18 (1838), pp. 259–274. Cited in MathWorld article "Divisor Function" below. de la Vallée Poussin, C.-J. Untitled communication. Annales de la Societe Scientifique de Bruxelles 22 (1898), pp. 84–90. Cited in MathWorld article "Euler-Mascheroni Constant" below. External links Number theory
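A quick numerical illustration of de la Vallée-Poussin's result (an informal sanity check, not part of the proof): averaging the shortfall 1 − {n/p} over all primes p ≤ n slowly approaches γ ≈ 0.5772 as n grows.

```python
# Numerical check: the average shortfall of n/p below the next whole number
# tends to the Euler-Mascheroni constant (~0.5772) as n grows (slowly).
from math import floor

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

n = 1_000_000
shortfalls = [1 - (n / p - floor(n / p)) for p in primes_up_to(n)]
print(sum(shortfalls) / len(shortfalls))   # close to gamma = 0.5772...
```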
https://en.wikipedia.org/wiki/Game%20accessibility
Within the field of human–computer interaction, accessibility of video games is considered a sub-field of computer accessibility, which studies how software and computers can be made accessible to users with various types of impairments. It can also include tabletop RPGs, board games, and related products. In spring 2020, the COVID-19 pandemic caused a massive boom in the video game industry. With an increasing number of people interested in playing video games, and with video games increasingly being used for purposes other than entertainment, such as education, rehabilitation or health, game accessibility has become an emerging field of research, especially as players with disabilities stand to benefit the most from the opportunities video games offer. A 2010 study estimated that 2% of the U.S. population is unable to play a game at all because of an impairment and 9% can play games but suffer a reduced gaming experience. A study conducted by casual games studio PopCap Games found that an estimated one in five casual video gamers have a physical, mental or developmental disability. As games are increasingly used as education tools, there may be a legal obligation to make them accessible, as Section 508 of the Rehabilitation Act mandates that schools and universities that rely on federal funding must make their electronic and information technologies accessible. The U.S. Federal Communications Commission (FCC) requires in-game communication between players on consoles to be accessible to players with sensory disabilities. In 2021, video game developers attempted to improve accessibility through every possible avenue, including reducing difficulty and enabling auto fire. Outside of being used as education or rehabilitation tools, video games also function as a form of identity, leading disabled people to work much harder to attach additional meaning to gaming. This transforms the very nature of playing video games into a fight against a digitally divided c
https://en.wikipedia.org/wiki/Current%20differencing%20buffered%20amplifier
A current differencing buffered amplifier (CDBA) is a multi-terminal active component with two inputs and two outputs, developed by Cevdet Acar and Serdar Özoğuz. Its block diagram can be seen from the figure. It is derived from the current feedback amplifier (CFA). Basic operation The characteristic equations of this element can be given as: $V_p = V_n = 0$, $I_z = I_p - I_n$, $V_w = V_z$. Here, the current through the z-terminal follows the difference between the currents through the p-terminal and the n-terminal. Input terminals p and n are internally grounded. The difference of the input currents is converted into the output voltage Vw; therefore, the CDBA element can be considered a special type of current feedback amplifier with differential current input and grounded y input. The CDBA simplifies implementation, is free from parasitic input capacitances, is able to operate at frequencies from hundreds of MHz up to the GHz range, and is suitable for current-mode operation while also providing a voltage output. Several voltage- and current-mode continuous-time filters, oscillators, analog multipliers, inductance simulators and a PID controller have been developed using this active element. References Acar, C., and Ozoguz, S., "A new versatile building block: current differencing buffered amplifier suitable for analog signal processing filters", Microelectronics Journal, vol. 30, pp. 157–160, 1999. Ali Ümit Keskin, "A Four Quadrant Analog Multiplier employing single CDBA", Analog Integrated Circuits and Signal Processing, vol. 40, no. 1, pp. 99–101, 2004. Tangsrirat, W., Klahan, K., Kaewdang, K., and Surakampontorn, W., "Low-Voltage Wide-Band NMOS-Based Current Differencing Buffered Amplifier", ECTI Transactions on Electrical Eng., Electronics, and Communications, vol. 2, no. 1, pp. 15–22, 2004. Electronic amplifiers
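The ideal terminal behaviour is simple enough to express directly (a behavioural sketch of the characteristic equations above, not a circuit-level model):

```python
# Minimal behavioural model of an ideal CDBA, following the characteristic
# equations Vp = Vn = 0, Iz = Ip - In, Vw = Vz (illustrative sketch only).
def ideal_cdba(i_p: float, i_n: float, v_z: float):
    """Return (v_p, v_n, i_z, v_w) for the given terminal excitations."""
    v_p = 0.0            # p input is internally grounded
    v_n = 0.0            # n input is internally grounded
    i_z = i_p - i_n      # z current follows the input current difference
    v_w = v_z            # w buffers the voltage developed at the z terminal
    return v_p, v_n, i_z, v_w

# Example: 2 mA into p, 0.5 mA into n, z loaded so that v_z = 1.2 V.
print(ideal_cdba(2e-3, 0.5e-3, 1.2))   # -> (0.0, 0.0, 0.0015, 1.2)
```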
https://en.wikipedia.org/wiki/Ethyl%20methylphenylglycidate
Ethyl methylphenylglycidate, commonly known as strawberry aldehyde, is an organic compound used in the flavor industry in artificial fruit flavors, in particular strawberry. Uses Because of its pleasant taste and aroma, ethyl methylphenylglycidate finds use in the fragrance industry, in artificial flavors, and in cosmetics. Its end applications include perfumes, soaps, beauty care products, detergents, pharmaceuticals, baked goods, candies, ice cream, and others. Chemistry Ethyl methylphenylglycidate contains ester and epoxide functional groups; despite its common name, it contains no aldehyde group. It is a colourless liquid that is insoluble in water. Ethyl methylphenylglycidate is usually prepared by the condensation of acetophenone and the ethyl ester of monochloroacetic acid in the presence of a base, in a reaction known as the Darzens condensation. Safety Long-term, high-dose studies in rats have demonstrated that ethyl methylphenylglycidate has no significant adverse health effects and is not carcinogenic. The US Food and Drug Administration has classified ethyl methylphenylglycidate as generally recognized as safe (GRAS). See also List of strawberry topics References Ethyl esters Epoxides Flavors Food additives Perfume ingredients Strawberries
https://en.wikipedia.org/wiki/Mark%20Kryder
Mark Howard Kryder (born October 7, 1943 in Portland, Oregon) was Seagate Corp.'s senior vice president of research and chief technology officer. Kryder holds a Bachelor of Science degree in electrical engineering from Stanford University and a Ph.D. in electrical engineering and physics from the California Institute of Technology. Kryder was elected a member of the National Academy of Engineering in 1994 for contributions to the understanding of magnetic domain behavior and for leadership in information storage research. He is known for "Kryder's law", an observation from the mid-2000s about the increasing capacity of magnetic hard drives. Kryder's law projection A 2005 Scientific American article, titled "Kryder's Law", described Kryder's observation that magnetic disk areal storage density was then increasing at a rate exceeding Moore's Law. The pace was then much faster than the two-year doubling time of semiconductor chip density posited by Moore's law. In 2005, commodity drive density of 110 Gbit/in2 (170 Mbit/mm2) had been reached, up from 100 Mbit/in2 (155 kbit/mm2) circa 1990. This rate does not extrapolate back to the initial 2 kilobit/in2 (3.1 bit/mm2) drives introduced in 1956, as growth rates surged during the latter 15-year period. In 2009, Kryder projected that if hard drives were to continue to progress at their then-current pace of about 40% per year, then in 2020 a two-platter, 2.5-inch disk drive would store approximately 40 terabytes (TB) and cost about $40. The validity of Kryder's 2009 projection was questioned halfway into the forecast period, and some called the actual rate of areal density progress the "Kryder rate". As of 2014, the observed Kryder rate had fallen well short of the 2009 forecast of 40% per year. A single 2.5-inch platter stored around 0.3 terabytes in 2009 and this reached 0.6 terabytes in 2014. The Kryder rate over the five years ending in 2014 was around 15% per year. To reach 20 terabytes by 2020, starting in
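The growth rates quoted above can be checked with simple compound-growth arithmetic (illustrative only; the baseline figures are taken from the paragraph above, with the 40 TB two-platter projection read as 20 TB per platter):

```python
# Back-of-envelope check of the quoted "Kryder rate" figures.
def cagr(start, end, years):
    """Compound annual growth rate."""
    return (end / start) ** (1 / years) - 1

# Observed: ~0.3 TB (2009) -> ~0.6 TB (2014) per 2.5-inch platter.
print(f"{cagr(0.3, 0.6, 5):.1%}")    # ~14.9% per year, matching the ~15% figure

# Rate needed to reach the projected 20 TB per platter by 2020, from 2014:
print(f"{cagr(0.6, 20.0, 6):.1%}")   # ~79% per year, far above the observed rate
```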
https://en.wikipedia.org/wiki/Tom%20Maibaum
Thomas Stephen Edward Maibaum, Fellow of the Royal Society of Arts (FRSA), is a computer scientist. Maibaum has a Bachelor of Science (B.Sc.) undergraduate degree in pure mathematics from the University of Toronto, Canada (1970), and a Doctor of Philosophy (Ph.D.) in computer science from Queen Mary and Royal Holloway Colleges, University of London, England (1974). Maibaum has held academic posts at Imperial College London, King's College London (UK) and McMaster University (Canada). His research interests have concentrated on the theory of specification, together with its application in different contexts, in the general area of software engineering. From 1996 to 2005, he was involved with developing international standards in programming and informatics, as a member of the International Federation for Information Processing (IFIP) Working Group 2.1 on Algorithmic Languages and Calculi, which specified and continues to maintain and support the programming languages ALGOL 60 and ALGOL 68. He is a Fellow of the Institution of Engineering and Technology and the Royal Society of Arts. References External links KCL home page , McMaster University Living people 20th-century Hungarian people Hungarian expatriates in Canada University of Toronto alumni Hungarian expatriates in the United Kingdom Alumni of Queen Mary University of London Alumni of Royal Holloway, University of London Academics of Imperial College London Academics of King's College London Hungarian computer scientists Formal methods people Academic staff of McMaster University Fellows of the Institution of Engineering and Technology Year of birth missing (living people)
https://en.wikipedia.org/wiki/Digital%20credential
Digital credentials are the digital equivalent of paper-based credentials. Just as a paper-based credential could be a passport, a driver's license, a membership certificate or some kind of ticket to obtain some service, such as a cinema ticket or a public transport ticket, a digital credential is a proof of qualification, competence, or clearance that is attached to a person. Like their paper counterparts, digital credentials prove something about their owner. Both types of credentials may contain personal information such as the person's name, birthplace, birthdate, and/or biometric information such as a picture or a fingerprint. Because of the still evolving, and sometimes conflicting, terminologies used in the fields of computer science, computer security, and cryptography, the term "digital credential" is used inconsistently in these fields. Sometimes passwords or other means of authentication are referred to as credentials. In operating system design, credentials are the properties of a process (such as its effective UID) that are used for determining its access rights. On other occasions, certificates and associated key material such as those stored in PKCS#12 and PKCS#15 are referred to as credentials. Digital badges are a form of digital credential that indicate an accomplishment, skill, quality or interest. Digital badges can be earned in a variety of learning environments. Digital cash Money, in general, is not regarded as a form of qualification that is inherently linked to a specific individual, as the value of token money is perceived to reside independently of its holder. However, the emergence of digital assets, such as digital cash, has introduced a new set of challenges due to their susceptibility to replication. Consequently, digital cash protocols have been developed with additional measures to mitigate the issue of double spending, wherein a coin is used for multiple transactions. Credentials, on the other hand, serve as tangible evidence of an individual's qualifications or a
https://en.wikipedia.org/wiki/Front%20Line%20%28video%20game%29
Front Line is a military-themed run and gun video game released by Taito for arcades in November 1982. It was one of the first overhead run and gun games, a precursor to many similarly-themed games of the mid-to-late 1980s. Front Line is controlled with a joystick, a single button, and a rotary dial that can be pushed in like a button. The single button is used to throw grenades and to enter and exit tanks, while the rotary dial aims and fires the player's gun. The game was created by Tetsuya Sasaki. It was a commercial success in Japan, where it was the seventh highest-grossing arcade game of 1982. However, it received a mixed critical and commercial reception in Western markets, with praise for its originality but criticism for its difficulty. The game's overhead run and gun formula preceded Capcom's Commando (1985) by several years. The SNK shooters TNK III (1985) and Ikari Warriors (1986) follow conventions established by Front Line, including the vertically scrolling levels, entering/exiting tanks, and not dying when an occupied tank is destroyed. Gameplay Playing as a lone soldier, the player's ultimate objective is to lob a hand grenade into the enemy's fort, first by fighting off infantry units and then battling tanks before finally reaching the opponent's compound. The player begins with two weapons: a pistol and grenades, with no ammo limit. Once the player has advanced far enough into enemy territory, there is a "tank warfare" stage in which the player can hijack a tank to fight off other enemy tanks. There are two types of tanks available: a light tank armed with a machine gun and a heavy tank armed with a cannon. The light tank is more nimble, but can be easily destroyed by the enemy. The heavy tank is slower, but can sustain one hit from a light tank; a second hit from a light tank will destroy it. A single shot from a heavy tank will destroy either type of tank. If a partially damaged tank is evacuated, the player can jump back in and resume its normal oper
https://en.wikipedia.org/wiki/Xybernaut
Xybernaut Corporation was a maker of wearable mobile computing hardware, software, and services. Its products included the Atigo tablet PC, the Poma wearable computer, and the MA-V wearable computer. The company was headquartered in Fairfax, Virginia, until 2006, when it moved to Chantilly, Virginia. Although its first wearable computer, the Poma, created an initial stir when introduced in 2002, the slowness and disconcerting appearance of the product would land it on "worst tech fail" lists by the turn of the decade. Although it survived a bankruptcy, by 2017 Xybernaut had collapsed under financial scandal and regulatory and criminal strictures. Early history The company was founded in 1990 as Computer Products & Services Incorporated (CPSI) by Edward G. Newman. In 1994, Newman's brother, Steven A. Newman, became the president of the company. The company had its initial public offering in 1996 under the new name Xybernaut. It subsequently posted 33 consecutive quarterly losses, despite repeated promises by the Newmans that profitability was right around the corner. In mid-1998, former Virginia governor George Allen joined the company's board of directors. He remained on the board until December 2000, resigning after he was elected a U.S. Senator the month before. In 1998 and 1999, McGuire Woods LLP, the law firm of which Allen was a partner, billed $315,925 to Xybernaut for legal work. He was granted 110,000 company stock options that, at their peak, were worth $1.5 million, but he never exercised those options, which expired 90 days after he left the board. In September 1999, the company's board dismissed its accounting firm, PricewaterhouseCoopers, which had issued a report with a "going concern" paragraph that questioned the company's financial health. This was just one of many signs that the Newman brothers discouraged transparency in company accounting practices. Fraud charges and bankruptcy In
https://en.wikipedia.org/wiki/Acorn%20nut
An acorn nut, also referred to as crown hex nut, blind nut, cap nut, domed cap nut, or dome nut (UK), is a nut that has a domed end on one side. When used together with a threaded fastener with an external male thread, the domed end encloses the external thread, either to protect the thread or to protect nearby objects from contact with the thread. In addition, the dome gives a more finished appearance. Acorn nuts are usually made of brass, steel, stainless steel (low carbon content) or nylon. They can also be chrome plated and given a mirror finish. There are two types of acorn nuts: the low, or standard, acorn nut, and the high acorn nut, which is wider and taller and will protect extra-long studs. There are also self-locking acorn nuts that have distorted threads in the hex area to create a tight friction fit to prevent the nut from vibrating loose. There are standards governing the manufacture of acorn nuts. One is Society of Automotive Engineers (SAE) Standard J483, High and Low Crown (Blind, Acorn) Hex Nuts. Another is Deutsches Institut für Normung (DIN) 1587, Hexagon Domed Cap Nuts. References Nuts (hardware)
https://en.wikipedia.org/wiki/Problem%20of%20future%20contingents
Future contingent propositions (or simply, future contingents) are statements about states of affairs in the future that are contingent: neither necessarily true nor necessarily false. The problem of future contingents seems to have been first discussed by Aristotle in chapter 9 of his On Interpretation (De Interpretatione), using the famous sea-battle example. Roughly a generation later, Diodorus Cronus from the Megarian school of philosophy stated a version of the problem in his notorious master argument. The problem was later discussed by Leibniz. The problem can be expressed as follows. Suppose that a sea-battle will not be fought tomorrow. Then it was also true yesterday (and the week before, and last year) that it will not be fought, since any true statement about what will be the case in the future was also true in the past. But all past truths are now necessary truths; therefore it was already necessarily true, prior to and up to the time of the statement "A sea battle will not be fought tomorrow", that the battle will not be fought, and thus the statement that it will be fought is necessarily false. Therefore, it is not possible that the battle will be fought. In general, if something will not be the case, it is not possible for it to be the case. "For a man may predict an event ten thousand years beforehand, and another may predict the reverse; that which was truly predicted at the moment in the past will of necessity take place in the fullness of time" (De Int. 18b35). This conflicts with the idea of our own free choice: that we have the power to determine or control the course of events in the future, which seems impossible if what happens, or does not happen, is necessarily going to happen, or not happen. As Aristotle says, if so there would be no need "to deliberate or to take trouble, on the supposition that if we should adopt a certain course, a certain result would follow, while, if we did not, the result would not follow". Aristotle's solution
https://en.wikipedia.org/wiki/Whitney%20extension%20theorem
In mathematics, in particular in mathematical analysis, the Whitney extension theorem is a partial converse to Taylor's theorem. Roughly speaking, the theorem asserts that if A is a closed subset of a Euclidean space, then it is possible to extend a given function defined on A in such a way as to have prescribed derivatives at the points of A. It is a result of Hassler Whitney. Statement A precise statement of the theorem requires careful consideration of what it means to prescribe the derivative of a function on a closed set. One difficulty, for instance, is that closed subsets of Euclidean space in general lack a differentiable structure. The starting point, then, is an examination of the statement of Taylor's theorem. Given a real-valued Cm function f(x) on Rn, Taylor's theorem asserts that for each a, x, y ∈ Rn, there is a function Rα(x,y) approaching 0 uniformly as x,y → a such that $$f(x) = \sum_{|\alpha|\le m} \frac{D^\alpha f(y)}{\alpha!}(x-y)^\alpha + \sum_{|\alpha| = m} R_\alpha(x,y)\,\frac{(x-y)^\alpha}{\alpha!}, \qquad (1)$$ where the sum is over multi-indices α. Let fα = Dαf for each multi-index α. Differentiating (1) with respect to x, and possibly replacing R as needed, yields $$f_\alpha(x) = \sum_{|\beta| \le m-|\alpha|} \frac{f_{\alpha+\beta}(y)}{\beta!}(x-y)^\beta + R_\alpha(x,y), \qquad (2)$$ where Rα is $o(|x-y|^{m-|\alpha|})$ uniformly as x,y → a. Note that (2) may be regarded as purely a compatibility condition between the functions fα which must be satisfied in order for these functions to be the coefficients of the Taylor series of the function f. It is this insight which facilitates the following statement: Theorem. Suppose that fα are a collection of functions on a closed subset A of Rn for all multi-indices α with |α| ≤ m satisfying the compatibility condition (2) at all points x, y, and a of A. Then there exists a function F(x) of class Cm such that: F = f0 on A. DαF = fα on A for |α| ≤ m. F is real-analytic at every point of Rn − A. Proofs are given in Whitney's original paper and in several later expositions. Extension in a half space Seeley proved a sharpening of the Whitney extension theorem in the special case of a half space. A smooth function on a half space Rn,+ of points where xn ≥ 0 is a smooth function f on the interior xn > 0 for which the
https://en.wikipedia.org/wiki/Method%20of%20moments%20%28probability%20theory%29
In probability theory, the method of moments is a way of proving convergence in distribution by proving convergence of a sequence of moment sequences. Suppose X is a random variable and that all of the moments $\operatorname{E}(X^k)$ exist. Further suppose the probability distribution of X is completely determined by its moments, i.e., there is no other probability distribution with the same sequence of moments (cf. the problem of moments). If $$\lim_{n\to\infty}\operatorname{E}(X_n^k) = \operatorname{E}(X^k)$$ for all values of k, then the sequence {Xn} converges to X in distribution. The method of moments was introduced by Pafnuty Chebyshev for proving the central limit theorem; Chebyshev cited earlier contributions by Irénée-Jules Bienaymé. More recently, it has been applied by Eugene Wigner to prove Wigner's semicircle law, and has since found numerous applications in the theory of random matrices. Notes Moment (mathematics)
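A numerical illustration (not from the article): the moments of a standardized Binomial(n, 1/2) approach the standard normal moments 0, 1, 0, 3, 0, 15, which is the moment-convergence route to the central limit theorem that Chebyshev pioneered.

```python
# Moments of a standardized Binomial(n, 1/2), computed from the exact pmf,
# converge to the standard normal moments 0, 1, 0, 3, 0, 15 (illustrative).
from math import comb, sqrt

def standardized_binomial_moment(n, k, p=0.5):
    """E[((S_n - n p) / sqrt(n p (1-p)))^k] for S_n ~ Binomial(n, p)."""
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    return sum(
        comb(n, s) * p**s * (1 - p) ** (n - s) * ((s - mu) / sigma) ** k
        for s in range(n + 1)
    )

for k in range(1, 7):
    print(k, round(standardized_binomial_moment(1000, k), 3))
# prints approximately 0, 1, 0, 3, 0, 15 for k = 1..6
```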
https://en.wikipedia.org/wiki/Ergodicity
In mathematics, ergodicity expresses the idea that a point of a moving system, either a dynamical system or a stochastic process, will eventually visit all parts of the space that the system moves in, in a uniform and random sense. This implies that the average behavior of the system can be deduced from the trajectory of a "typical" point. Equivalently, a sufficiently large collection of random samples from a process can represent the average statistical properties of the entire process. Ergodicity is a property of the system; it is a statement that the system cannot be reduced or factored into smaller components. Ergodic theory is the study of systems possessing ergodicity. Ergodic systems occur in a broad range of systems in physics and in geometry. This can be roughly understood to be due to a common phenomenon: the motion of particles, that is, geodesics on a hyperbolic manifold, is divergent; when that manifold is compact, that is, of finite size, those orbits return to the same general area, eventually filling the entire space. Ergodic systems capture the common-sense, every-day notions of randomness, such as the idea that smoke might come to fill an entire room, that a block of metal might eventually come to have the same temperature throughout, or that flips of a fair coin may come up heads half the time and tails half the time. A stronger concept than ergodicity is that of mixing, which aims to mathematically describe the common-sense notions of mixing, such as mixing drinks or mixing cooking ingredients. The proper mathematical formulation of ergodicity is founded on the formal definitions of measure theory and dynamical systems, and rather specifically on the notion of a measure-preserving dynamical system. The origins of ergodicity lie in statistical physics, where Ludwig Boltzmann formulated the ergodic hypothesis. Informal explanation Ergodicity occurs in broad settings in physics and mathematics. All of these settings are unified by a common mathematical de
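A standard toy illustration of "time average equals space average" (an irrational rotation of the circle; illustrative code, not from the article):

```python
# For the irrational rotation x -> x + a (mod 1), the time average of an
# observable along a single orbit matches its space average over [0, 1).
from math import cos, pi, sqrt

a = sqrt(2) % 1.0                     # irrational rotation angle
f = lambda x: cos(2 * pi * x) ** 2    # observable; its space average is 1/2

x, total, steps = 0.1, 0.0, 100_000
for _ in range(steps):
    total += f(x)
    x = (x + a) % 1.0

print(total / steps)   # ~0.5: one "typical" orbit recovers the space average
```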
https://en.wikipedia.org/wiki/List%20of%20Java%20APIs
There are two types of Java programming language application programming interfaces (APIs): The official core Java API, contained in the Android (Google), SE (OpenJDK and Oracle), and MicroEJ platforms. These packages (java.* packages) are the core Java language packages, meaning that programmers using the Java language must use them in order to make any worthwhile use of the Java language. Optional APIs that can be downloaded separately. The specifications of these APIs are defined by many different organizations in the world (Alljoyn, OSGi, Eclipse, JCP, E-S-R, etc.). The following is a partial list of application programming interfaces (APIs) for Java. APIs Following is a very incomplete list, as the number of APIs available for the Java platform is overwhelming. Rich client platforms Eclipse Rich Client Platform (RCP) NetBeans Platform Office-compliant libraries Apache POI JXL - for Microsoft Excel JExcel - for Microsoft Excel Compression LZMA SDK, the Java implementation of the SDK used by the popular 7-Zip file archive software JSON Jackson (API) Game engines Slick jMonkey Engine JPCT Engine LWJGL Real-time libraries Real-time Java is a catch-all term for a combination of technologies that allows programmers to write programs that meet the demands of real-time systems in the Java programming language. Java's sophisticated memory management, native support for threading and concurrency, type safety, and relative simplicity have created a demand for its use in many domains. Its capabilities have been enhanced to support real time computational needs: Java supports a strict priority based threading model. Because Java threads support priorities, Java locking mechanisms support priority inversion avoidance techniques, such as priority inheritance or the priority ceiling protocol. To overcome typical real time difficulties, the Java Community introduced a specification for real-time Java, JSR 1. A number of implementations
https://en.wikipedia.org/wiki/SCVP
The Server-based Certificate Validation Protocol (SCVP) is an Internet protocol for determining the path between an X.509 digital certificate and a trusted root (Delegated Path Discovery) and the validation of that path (Delegated Path Validation) according to a particular validation policy. Overview When a relying party receives a digital certificate and needs to decide whether to trust the certificate, it first needs to determine whether the certificate can be linked to a trusted certificate. This process may involve chaining the certificate back through several issuers, as in the following case: Equifax Secure eBusiness CA-1 ACME Co Certificate Authority Joe User Currently, the creation of this chain of certificates is performed by the application receiving the signed message. The process is termed "path discovery" and the resulting chain is called a "certification path". Many Windows applications, such as Outlook, use the Cryptographic Application Programming Interface (CAPI) for path discovery. CAPI is capable of building certification paths using any certificates that are installed in Windows certificate stores or provided by the relying party application. The Equifax CA certificate, for example, comes installed in Windows as a trusted certificate. If CAPI knows about the ACME Co CA certificate or if it is included in a signed email and made available to CAPI by Outlook, CAPI can create the certification path above. However, if CAPI cannot find the ACME Co CA certificate, it has no way to verify that Joe User is trusted. SCVP provides a standards-based client-server protocol for solving this problem using Delegated Path Discovery, or DPD. When using DPD, a relying party asks a server for a certification path that meets its needs. The SCVP client's request contains the certificate that it is attempting to trust and a set of trusted certificates. The SCVP server's response contains a set of certificates making up a valid path betwee
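The path-discovery idea can be sketched with the example chain above (hypothetical toy data structures; real SCVP messages are ASN.1-encoded requests and responses carrying actual certificates):

```python
# Toy illustration of certification path discovery, as a DPD server performs
# it: chain a certificate back through issuers until a trusted root is hit.
certs = {
    "Joe User": {"issuer": "ACME Co Certificate Authority"},
    "ACME Co Certificate Authority": {"issuer": "Equifax Secure eBusiness CA-1"},
    "Equifax Secure eBusiness CA-1": {"issuer": "Equifax Secure eBusiness CA-1"},
}
trusted_roots = {"Equifax Secure eBusiness CA-1"}

def discover_path(subject):
    """Return the certification path from subject up to a trusted root."""
    path = [subject]
    while path[-1] not in trusted_roots:
        issuer = certs[path[-1]]["issuer"]
        if issuer == path[-1]:
            raise ValueError("reached an untrusted self-signed certificate")
        path.append(issuer)
    return path

print(discover_path("Joe User"))
# ['Joe User', 'ACME Co Certificate Authority', 'Equifax Secure eBusiness CA-1']
```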
https://en.wikipedia.org/wiki/Ka/Ks%20ratio
In genetics, the Ka/Ks ratio, also known as ω or the dN/dS ratio, is used to estimate the balance between neutral mutations, purifying selection and beneficial mutations acting on a set of homologous protein-coding genes. It is calculated as the ratio of the number of nonsynonymous substitutions per nonsynonymous site (Ka), in a given period of time, to the number of synonymous substitutions per synonymous site (Ks), in the same period. The latter are assumed to be neutral, so that the ratio indicates the net balance between deleterious and beneficial mutations. Values of Ka/Ks significantly above 1 are unlikely to occur without at least some of the mutations being advantageous. If beneficial mutations are assumed to make little contribution, then Ka/Ks estimates the degree of evolutionary constraint. Context Selection acts on variation in phenotypes, which are often the result of mutations in protein-coding genes. The genetic code is written in DNA sequences as codons, groups of three nucleotides. Each codon represents a single amino acid in a protein chain. However, there are more codons (64) than amino acids found in proteins (20), so many codons are effectively synonyms. For example, the DNA codons TTT and TTC both code for the amino acid Phenylalanine, so a change from the third T to C makes no difference to the resulting protein. On the other hand, the codon GAG codes for Glutamic acid while the codon GTG codes for Valine, so a change from the middle A to T does change the resulting protein, for better or (more likely) worse, so the change is not synonymous. These changes are illustrated in the tables below. The Ka/Ks ratio measures the relative rates of synonymous and nonsynonymous substitutions at a particular site. Methods Methods for estimating Ka and Ks use a sequence alignment of two or more nucleotide sequences of homologous genes that code for proteins (rather than being genetic switches, controlling development or the rate
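The synonymous/nonsynonymous distinction underlying Ka and Ks can be sketched with the codons mentioned above (a toy classifier only; a real Ka/Ks estimate also requires counting synonymous and nonsynonymous sites and correcting for multiple substitutions):

```python
# Classify single-codon changes as synonymous or nonsynonymous, using just
# the codons discussed in the text (a tiny excerpt of the genetic code).
CODON_TABLE = {
    "TTT": "Phe", "TTC": "Phe",   # both encode Phenylalanine
    "GAG": "Glu",                 # Glutamic acid
    "GTG": "Val",                 # Valine
}

def classify(codon_from: str, codon_to: str) -> str:
    same_aa = CODON_TABLE[codon_from] == CODON_TABLE[codon_to]
    return "synonymous" if same_aa else "nonsynonymous"

print(classify("TTT", "TTC"))  # synonymous: the protein is unchanged
print(classify("GAG", "GTG"))  # nonsynonymous: Glu -> Val changes the protein
```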
https://en.wikipedia.org/wiki/Fraunhofer%20Institute%20for%20Applied%20Optics%20and%20Precision%20Engineering
The Fraunhofer Institute for Applied Optics and Precision Engineering (IOF), also referred to as the Fraunhofer IOF, is an institute of the Fraunhofer Society for the Advancement of Applied Research (FHG). The institute is based in Jena. Its activities comprise applied research and development in the natural sciences in the field of optics and precision engineering. The institute was founded in 1992. Research and development Building upon the experience of the Jena region in the field of surface and thin film technologies for optics, the Fraunhofer IOF conducts research and development in the area of optical systems aiming at enhancing the control of light – from its generation and manipulation to its actual use. The combination of competences in the areas of optics and precision engineering is particularly important. These focus areas are reflected in the department structure: Opto-mechanical System Design Micro and Nano-structured Optics Opto-mechatronical Components and Systems Precision Optical Components and Systems Functional Optical Surfaces and Layers Laser- and Fiber Technology Imaging and Sensing Emerging Technologies See also: thin film technology, surface physics, microstructure technology, nanotechnology, micro-optics, measurement technology, quantum technology CMN-Optics In July 2006, the Fraunhofer IOF opened the Center for Advanced Micro- and Nano-Optics (CMN-Optics). The core of the facility is the SB350-OS electron beam lithography system. This device, also known as an "electron beam recorder", allows minimal structure sizes in the range of 50 nm with high accuracy on substrate sizes up to 300 mm. The center is operated jointly with the Institute for Applied Physics (IAP) of the Friedrich Schiller University of Jena. The facility is also used by the Institute for Photonic Technologies (IPHT), Jena. The facility cost twelve million euros and was financed by the European Union, the Free State of Thuringia and the Fraunho
https://en.wikipedia.org/wiki/Thai%20Institute%20of%20Chemical%20Engineering%20and%20Applied%20Chemistry
The Thai Institute of Chemical Engineering and Applied Chemistry (TIChE) is a professional organization for chemical engineers. TIChE was established in 1996 to distinguish chemical engineers as a profession independent of chemists and mechanical engineers. History TIChE was established to separate the chemical engineering professional certificate from that of industrial engineering. The conference in 1990 was the first effort to establish the organization, through the cooperation of the Department of Chemical Engineering and the Department of Chemical Technology at Chulalongkorn University, and the Department of Chemical Engineering at King Mongkut's University of Technology Thonburi. At the 4th conference at Khon Kaen University in 1994, TIChE was formally established, and it was permitted by law on November 15, 1996. TIChE now comprises 18 university members. The Objectives of TIChE To promote and support the chemical engineering and chemical technology profession. To promote and support the educational standard of chemical engineering and chemical technology. To encourage cooperation and industrial development including research and knowledge. To disseminate knowledge and consulting in chemical engineering and chemical technology. To be an agent of the chemical engineering and chemical technology profession to cooperate with other organizations. University Members (sorted alphabetically) Burapha University Department of Chemical Engineering Chiang Mai University Department of Industrial Chemistry Chulalongkorn University Department of Chemical Engineering Department of Chemical Technology The Petroleum and Petrochemical College Kasetsart University Department of Chemical Engineering Khon Kaen University Department of Chemical Engineering King Mongkut's Institute of Technology Ladkrabang Department of Chemical Engineering King Mongkut's University of Technology North Bangkok Department of Chemical Engineering Department of Industrial Chemistry King Mongkut's Univer
https://en.wikipedia.org/wiki/Bear%20JJ1
Bear JJ1 (2004 – 26 June 2006) was a brown bear whose travels and exploits in Austria and Germany in the first half of 2006 drew international attention. JJ1, also known as Bruno in the German press (some newspapers also gave the bear different names, such as Beppo or Petzi), is believed to have been the first brown bear on German soil in 170 years. Origin JJ1 was originally part of an EU-funded €1 million conservation project in Italy, but had wandered across Austria and into Germany. A spokesman said that there had been "co-ordination" between Italy, Austria and Slovenia to ensure the bear's welfare, but apparently Germany had not been informed. The Life Ursus reintroduction project of the Italian province of Trento had introduced 10 Slovenian bears in the region, monitoring them. JJ1 was the first son of Jurka and Joze (hence the name JJ1); his younger brother JJ3 also showed an aggressive character, wandered into Switzerland in 2008, and was killed there. Because of this second problem, the mother, Jurka, was put in captivity in Italy, despite protests by environmentalists; park authorities maintained that 50% of the incidents involving bears had been caused by Jurka or her descendants. In April 2023, his sister JJ4 killed a 26-year-old jogger in the Trentino province of northern Italy. In the summer of 2020, she had already attacked and injured a man and his son on Monte Peller and was to have been killed, but animal rights activists prevented that. Overview Previously, the last sighting of a bear in what is now Germany was recorded in 1838, when hunters shot a bear in Bavaria. Initially heralded as a welcome visitor and a symbol of the success of endangered species reintroduction programs, JJ1's dietary preferences for sheep, chickens, and beehives led government officials to believe that he could become a threat to humans, and they ordered that he be shot or captured. Public objection to the order resulted in its revision, and the German government tried to
https://en.wikipedia.org/wiki/Decision%20tree%20pruning
Pruning is a data compression technique in machine learning and search algorithms that reduces the size of decision trees by removing sections of the tree that are non-critical and redundant for classifying instances. Pruning reduces the complexity of the final classifier, and hence improves predictive accuracy by reducing overfitting. One of the questions that arises in a decision tree algorithm is the optimal size of the final tree. A tree that is too large risks overfitting the training data and poorly generalizing to new samples. A small tree might not capture important structural information about the sample space. However, it is hard to tell when a tree algorithm should stop, because it is impossible to tell if the addition of a single extra node will dramatically decrease error. This problem is known as the horizon effect. A common strategy is to grow the tree until each node contains a small number of instances, then use pruning to remove nodes that do not provide additional information. Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by a cross-validation set. There are many techniques for tree pruning that differ in the measurement that is used to optimize performance. Techniques Pruning processes can be divided into two types (pre- and post-pruning). Pre-pruning procedures prevent a complete induction of the training set by applying a stopping criterion in the induction algorithm (e.g., a maximum tree depth, or requiring information gain(Attr) > minGain). Pre-pruning methods are considered to be more efficient because they do not induce an entire tree, but rather trees remain small from the start. Pre-pruning methods share a common problem, the horizon effect: the undesired premature termination of the induction by the stopping criterion. Post-pruning (or just pruning) is the most common way of simplifying trees. Here, nodes and subtrees are replaced with leaves to reduce complexity.
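As a concrete post-pruning illustration, the sketch below uses cost-complexity pruning in scikit-learn (one common technique among the many the article alludes to; the dataset and alpha value are arbitrary choices):

```python
# Post-pruning via cost-complexity pruning: larger ccp_alpha prunes more
# subtrees away, trading tree size against training-set fit.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.01).fit(X_tr, y_tr)

print("nodes:", full.tree_.node_count, "->", pruned.tree_.node_count)
print("test accuracy:", full.score(X_te, y_te), "->", pruned.score(X_te, y_te))
```

The pruned tree is much smaller, and its held-out accuracy is typically as good or better, which is exactly the overfitting-reduction effect described above.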
https://en.wikipedia.org/wiki/Tsung
Tsung (formerly known as idx-Tsunami) is a stress testing tool written in the Erlang language and distributed under the GPL license. It can currently stress test HTTP, WebDAV, LDAP, MySQL, PostgreSQL, SOAP and XMPP servers. Tsung can simulate hundreds of simultaneous users on a single system. It can also function in a clustered environment. Features Features include: Several IP addresses can be used on a single machine using the underlying OS's IP Aliasing. OS monitoring (CPU, memory, and network traffic) using SNMP, munin-node agents or Erlang agents on remote servers. Different types of users can be simulated. Dynamic sessions can be described in XML (to retrieve, at runtime, an ID from the server output and use it later in the session). Simulated user thinktimes and the arrival rate can be randomized via probability distribution. HTML reports can be generated during the load to view response time measurements, server CPU, and other statistics. References External links Tsung Project Page Load Testing AWS Kinesis Tsung Information Provided By Process One Performance Measurement & Applications Benchmarking With Erlang. EUC05 Benchmarks (computing) Erlang (programming language) Load testing tools
https://en.wikipedia.org/wiki/Shadow%20%28OS/2%29
In the graphical Workplace Shell (WPS) of the OS/2 operating system, a shadow is an object that represents another object. A shadow is a stand-in for any other object on the desktop, such as a document, an application, a folder, a hard disk, a network share or removable medium, or a printer. A target object can have an arbitrary number of shadows. When double-clicked, the desktop acts the same way as if the original object had been double-clicked. The shadow's context menu is the same as the target object's context menu, with the addition of an "Original" sub-menu, that allows the location of, and explicit operation upon, the original object. A shadow is a dynamic reference to an object. The original may be moved to another place in the file system, without breaking the link. The WPS updates shadows of objects whenever the original target objects are renamed or moved. To do this, it requests notification from the operating system of all file rename operations. (Thus if a target filesystem object is renamed when the WPS is not running, the link between the shadow and the target object is broken.) Similarities to and differences from other mechanisms Shadows are similar in operation to aliases in Mac OS, although there are some differences: Shadows in the WPS are not filesystem objects, as aliases are. They are derived from the WPAbstract class, and thus their backing storage is the user INI file, not a file in the file system. Thus shadows are invisible to applications that do not use the WPS API. The WPS has no mechanism for re-connecting shadows when the link between them and the target object has been broken. (Although where the link has been broken because target objects are temporarily inaccessible, restarting the WPS after the target becomes accessible once more often restores the link.) Shadows are different from symbolic links and shortcuts because they are not filesystem objects, and because shadows are dynamically updated as target objects ar
https://en.wikipedia.org/wiki/Dynamic%20program%20analysis
Dynamic program analysis is analysis of computer software that involves executing the program in question (as opposed to static program analysis, which does not). Dynamic program analysis includes familiar techniques from software engineering such as unit testing, debugging, and measuring code coverage, but also includes lesser-known techniques like program slicing and invariant inference. Dynamic program analysis is widely applied in security in the form of runtime memory error detection, fuzzing, dynamic symbolic execution, and taint tracking. For dynamic program analysis to be effective, the target program must be executed with sufficient test inputs to cover almost all possible outputs. Use of software testing measures such as code coverage helps increase the chance that an adequate slice of the program's set of possible behaviors has been observed. Also, care must be taken to minimize the effect that instrumentation has on the execution (including temporal properties) of the target program. Dynamic analysis is in contrast to static program analysis. Unit tests, integration tests, system tests and acceptance tests use dynamic testing. Types of dynamic analysis Code coverage Computing the code coverage according to a test suite or a workload is a standard dynamic analysis technique. Gcov is the GNU source code coverage program. VB Watch injects dynamic analysis code into Visual Basic programs to monitor code coverage, call stack, execution trace, instantiated objects and variables. Dynamic testing Dynamic testing involves executing a program on a set of test cases. Memory error detection AddressSanitizer: Memory error detection for Linux, macOS, Windows, and more. Part of LLVM. BoundsChecker: Memory error detection for Windows based applications. Part of Micro Focus DevPartner. Dmalloc: Library for checking memory allocation and leaks. Software must be recompiled, and all files must include the special C header file dmalloc.h. Intel Inspector: Dy
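A minimal taste of dynamic analysis in pure Python (illustrative only; production coverage tools such as Gcov instrument the compiled program rather than relying on an interpreter hook): the tracer below records which lines a given test input actually executes.

```python
# Tiny line-coverage tracer built on Python's built-in tracing hook.
import sys

executed = set()

def tracer(frame, event, arg):
    if event == "line":
        executed.add((frame.f_code.co_name, frame.f_lineno))
    return tracer   # keep tracing inside nested calls

def absolute(x):
    if x < 0:
        return -x
    return x

sys.settrace(tracer)
absolute(5)          # this input never exercises the negative branch
sys.settrace(None)

print(sorted(executed))  # (function, line) pairs this test input covered
```

Running it shows the negative branch is never reached, illustrating why dynamic analysis needs sufficient test inputs to observe an adequate slice of the program's behaviors.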
https://en.wikipedia.org/wiki/Topological%20entropy
In mathematics, the topological entropy of a topological dynamical system is a nonnegative extended real number that is a measure of the complexity of the system. Topological entropy was first introduced in 1965 by Adler, Konheim and McAndrew. Their definition was modelled after the definition of the Kolmogorov–Sinai, or metric entropy. Later, Dinaburg and Rufus Bowen gave a different, weaker definition reminiscent of the Hausdorff dimension. The second definition clarified the meaning of the topological entropy: for a system given by an iterated function, the topological entropy represents the exponential growth rate of the number of distinguishable orbits of the iterates. An important variational principle relates the notions of topological and measure-theoretic entropy. Definition A topological dynamical system consists of a Hausdorff topological space X (usually assumed to be compact) and a continuous self-map f. Its topological entropy is a nonnegative extended real number that can be defined in various ways, which are known to be equivalent. Definition of Adler, Konheim, and McAndrew Let X be a compact Hausdorff topological space. For any finite open cover C of X, let H(C) be the logarithm (usually to base 2) of the smallest number of elements of C that cover X. For two covers C and D, let $C \vee D$ be their (minimal) common refinement, which consists of all the non-empty intersections of a set from C with a set from D, and similarly for multiple covers. For any continuous map f: X → X, the following limit exists: $H(f, C) = \lim_{n \to \infty} \frac{1}{n} H\left(C \vee f^{-1}C \vee \cdots \vee f^{-(n-1)}C\right).$ Then the topological entropy of f, denoted h(f), is defined to be the supremum of H(f,C) over all possible finite covers C of X. Interpretation The parts of C may be viewed as symbols that (partially) describe the position of a point x in X: all points x ∈ Ci are assigned the symbol Ci. Imagine that the position of x is (imperfectly) measured by a certain device and that each part of C corresponds to one possible outcome of the measurement. Then
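For reference, the Bowen–Dinaburg metric-space formulation mentioned above can be stated as follows (a standard form of the definition, not a quotation from the article).

```latex
% Bowen–Dinaburg formulation on a metric space (X, d): set
% d_n(x, y) = max_{0 <= i < n} d(f^i x, f^i y), and let N(n, eps) be the
% maximal cardinality of an (n, eps)-separated set. Then
h(f) \;=\; \lim_{\varepsilon \to 0}\; \limsup_{n \to \infty}\; \frac{1}{n} \log N(n, \varepsilon).
```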
https://en.wikipedia.org/wiki/Ordinal%20date
An ordinal date is a calendar date typically consisting of a year and an ordinal number, ranging between 1 and 366 (starting on January 1), representing the multiples of a day, called day of the year or ordinal day number (also known as ordinal day or day number). The two parts of the date can be formatted as "YYYY-DDD" to comply with the ISO 8601 ordinal date format. The year may sometimes be omitted, if it is implied by the context; the day may be generalized from integers to include a decimal part representing a fraction of a day. Nomenclature Ordinal date is the preferred name for what was formerly called the "Julian date" or similar names, which are still seen in old programming languages and spreadsheet software. The older names are deprecated because they are easily confused with the earlier dating system called the 'Julian day number' (JDN), which was in prior use and which remains ubiquitous in astronomical and some historical calculations. Calculation Computation of the ordinal day within a year is part of calculating the ordinal day throughout the years from a reference date, such as the Julian date. It is also part of calculating the day of the week, though for this purpose modulo 7 simplifications can be made. In the following text, several algorithms for calculating the ordinal day are presented. The inputs taken are integers y, m and d, for the year, month, and day numbers of the Gregorian or Julian calendar date. Trivial methods The most trivial method of calculating the ordinal day involves counting up all days that have elapsed per the definition: Let O be 0. For each month from 1 to m − 1, add the length of that month to O, taking care of leap years according to the calendar used. Add d to O. Similarly trivial is the use of a lookup table, such as the one referenced. Zeller-like The table of month lengths can be replaced following the method of encoding the month-length variation in Zeller's congruence. As in Zeller, the m is changed to m + 12 if m < 3. It can be shown (see below) that for a mont
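A minimal Python sketch of the trivial method described above, assuming the Gregorian leap-year rule; the function names are illustrative only.

```python
# Sketch of the "trivial method": accumulate month lengths, then add the
# day of the month. Uses the Gregorian leap-year rule.
MONTH_LENGTHS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def is_gregorian_leap(y):
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

def ordinal_day(y, m, d):
    o = sum(MONTH_LENGTHS[:m - 1]) + d
    if m > 2 and is_gregorian_leap(y):
        o += 1  # account for February 29
    return o

assert ordinal_day(2000, 3, 1) == 61   # leap year: 31 + 29 + 1
assert ordinal_day(1999, 12, 31) == 365
```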
https://en.wikipedia.org/wiki/K%C5%91nig%27s%20theorem%20%28graph%20theory%29
In the mathematical area of graph theory, Kőnig's theorem, proved by Dénes Kőnig in 1931, describes an equivalence between the maximum matching problem and the minimum vertex cover problem in bipartite graphs. It was discovered independently, also in 1931, by Jenő Egerváry in the more general case of weighted graphs. Setting A vertex cover in a graph is a set of vertices that includes at least one endpoint of every edge, and a vertex cover is minimum if no other vertex cover has fewer vertices. A matching in a graph is a set of edges no two of which share an endpoint, and a matching is maximum if no other matching has more edges. It is obvious from the definition that any vertex-cover set must be at least as large as any matching set (since for every edge in the matching, at least one vertex is needed in the cover). In particular, the minimum vertex cover set is at least as large as the maximum matching set. Kőnig's theorem states that, in any bipartite graph, the minimum vertex cover set and the maximum matching set have in fact the same size. Statement of the theorem In any bipartite graph, the number of edges in a maximum matching equals the number of vertices in a minimum vertex cover. Example The bipartite graph shown in the above illustration has 14 vertices; a matching with six edges is shown in blue, and a vertex cover with six vertices is shown in red. There can be no smaller vertex cover, because any vertex cover has to include at least one endpoint of each matched edge (as well as of every other edge), so this is a minimum vertex cover. Similarly, there can be no larger matching, because any matched edge has to include at least one endpoint in the vertex cover, so this is a maximum matching. Kőnig's theorem states that the equality between the sizes of the matching and the cover (in this example, both numbers are six) applies more generally to any bipartite graph. Proofs Constructive proof The following proof provides a way of constructing a minimum vertex cover
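The constructive proof alluded to above can be turned directly into code. The following self-contained Python sketch (with an assumed adjacency-dict input format) finds a maximum matching by augmenting paths and then derives a minimum vertex cover by alternating reachability, which is exactly the content of Kőnig's theorem.

```python
# Sketch of Kőnig's theorem made constructive: find a maximum matching by
# augmenting paths, then build a minimum vertex cover by alternating
# reachability from the unmatched left vertices. `adj` maps each left
# vertex to the right vertices adjacent to it (an assumed input format).

def max_matching(adj, n_left, n_right):
    match_r = [-1] * n_right          # right vertex -> matched left vertex

    def augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_r[v] == -1 or augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    for u in range(n_left):
        augment(u, [False] * n_right)
    return match_r

def min_vertex_cover(adj, n_left, n_right):
    match_r = max_matching(adj, n_left, n_right)
    match_l = [-1] * n_left
    for v, u in enumerate(match_r):
        if u != -1:
            match_l[u] = v
    # Alternating BFS from unmatched left vertices.
    visited_l = [match_l[u] == -1 for u in range(n_left)]
    visited_r = [False] * n_right
    frontier = [u for u in range(n_left) if visited_l[u]]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:              # non-matching edges L -> R
                if not visited_r[v]:
                    visited_r[v] = True
                    w = match_r[v]        # matching edge R -> L
                    if w != -1 and not visited_l[w]:
                        visited_l[w] = True
                        nxt.append(w)
        frontier = nxt
    # Kőnig: cover = (unvisited left) union (visited right).
    cover = [("L", u) for u in range(n_left) if not visited_l[u]]
    cover += [("R", v) for v in range(n_right) if visited_r[v]]
    return cover

# Path graph on L = {0, 1}, R = {0, 1, 2}: matching size 2, cover size 2.
print(min_vertex_cover({0: [0, 1], 1: [1, 2]}, 2, 3))
```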
https://en.wikipedia.org/wiki/Coefficient%20diagram%20method
In control theory, the coefficient diagram method (CDM) is an algebraic approach applied to a polynomial loop in the parameter space, where a special diagram called a "coefficient diagram" is used as the vehicle to carry the necessary information, and as the criterion of good design. The performance of the closed loop system is monitored by the coefficient diagram. The most notable advantages of CDM can be listed as follows: The design procedure is easily understandable, systematic and useful. Therefore, the coefficients of the CDM controller polynomials can be determined more easily than those of the PID or other types of controller. This makes it possible even for a new designer to realise a controller for any kind of system. There are explicit relations between the performance parameters specified before the design and the coefficients of the controller polynomials, as described in the literature. For this reason, the designer can easily realize many control systems having different performance properties for a given control problem, with a wide range of freedom. The development of different tuning methods is required for time-delay processes of different properties in PID control, but it is sufficient to use the single design procedure in the CDM technique. This is an outstanding advantage. It is particularly hard to design robust controllers realizing the desired performance properties for unstable, integrating and oscillatory processes having poles near the imaginary axis. It has been reported that successful designs can be achieved even in these cases by using CDM. It is theoretically proven that CDM design is equivalent to LQ design with proper state augmentation. Thus, CDM can be considered an "improved LQG", because the order of the controller is smaller and weight selection rules are also given. It is usually required that the controller for a given plant should be designed under some practical limitations. The controller is desired to be of minimum de
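As an illustration of how explicit those relations are, the following Python sketch computes target characteristic-polynomial coefficients from Manabe's commonly cited standard stability indices (γ1 = 2.5, γi = 2 for i ≥ 2) and an equivalent time constant τ. The function name and defaults are illustrative assumptions, not a prescribed CDM implementation; the recovery formula follows from the definitions γi = ai²/(a(i+1)·a(i−1)) and τ = a1/a0.

```python
# Sketch of building a CDM target characteristic polynomial from Manabe's
# standard stability indices (gamma_1 = 2.5, gamma_i = 2 for i >= 2) and
# an equivalent time constant tau; the relation
# a_i = a_0 * tau**i / prod_j gamma_j**(i - j) follows from
# gamma_i = a_i**2 / (a_{i+1} * a_{i-1}) and tau = a_1 / a_0.

def cdm_target_coefficients(n, tau, a0=1.0, gammas=None):
    """Coefficients a_0..a_n of the target polynomial, lowest order first."""
    if gammas is None:
        gammas = [2.5] + [2.0] * (n - 2)   # Manabe's standard form
    coeffs = [a0]
    for i in range(1, n + 1):
        denom = 1.0
        for j in range(1, i):
            denom *= gammas[j - 1] ** (i - j)
        coeffs.append(a0 * tau**i / denom)
    return coeffs

# Third-order target with tau = 1 s: [1, 1, 0.4, 0.08]
print(cdm_target_coefficients(3, 1.0))
```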
https://en.wikipedia.org/wiki/Xylan
Xylan (CAS number: 9014-63-5) is a type of hemicellulose, a polysaccharide consisting mainly of xylose residues. It is found in plants, in the secondary cell walls of dicots and all cell walls of grasses. Xylan is the third most abundant biopolymer on Earth, after cellulose and chitin. Composition Xylans are polysaccharides made up of β-1,4-linked xylose (a pentose sugar) residues with side branches of α-arabinofuranose and/or α-glucuronic acids. On the basis of substituent groups, xylan can be categorized into three classes: i) glucuronoxylan (GX), ii) neutral arabinoxylan (AX), and iii) glucuronoarabinoxylan (GAX). In some cases, xylans contribute to cross-linking of cellulose microfibrils and lignin through ferulic acid residues. Occurrence Plant cell structure Xylans play an important role in the integrity of the plant cell wall and increase cell wall recalcitrance to enzymatic digestion; thus, they help plants to defend against herbivores and pathogens (biotic stress). Xylan also plays a significant role in plant growth and development. Typically, the xylan content of hardwoods is 10–35%, whereas that of softwoods is 10–15%. The main xylan component in hardwoods is O-acetyl-4-O-methylglucuronoxylan, whereas arabino-4-O-methylglucuronoxylans are a major component in softwoods. In general, softwood xylans differ from hardwood xylans by the lack of acetyl groups and the presence of arabinose units linked by α-(1,3)-glycosidic bonds to the xylan backbone. Algae Some macrophytic green algae contain xylan (specifically homoxylan), especially those within the Codium and Bryopsis genera, where it replaces cellulose in the cell wall matrix. Similarly, it replaces the inner fibrillar cell-wall layer of cellulose in some red algae. Food science The quality of cereal flours and the hardness of dough are affected by their xylan content, which thus plays a significant role in the bread industry. The main constituent of xylan can be converted into xylitol (a xylose derivative), which
https://en.wikipedia.org/wiki/Stable%20manifold
In mathematics, and in particular the study of dynamical systems, the idea of stable and unstable sets or stable and unstable manifolds gives a formal mathematical definition to the general notions embodied in the idea of an attractor or repellor. In the case of hyperbolic dynamics, the corresponding notion is that of the hyperbolic set. Physical example The gravitational tidal forces acting on the rings of Saturn provide an easy-to-visualize physical example. The tidal forces flatten the ring into the equatorial plane, even as they stretch it out in the radial direction. Imagining the rings to be sand or gravel particles ("dust") in orbit around Saturn, the tidal forces are such that any perturbation that pushes a particle above or below the equatorial plane results in that particle feeling a restoring force, pushing it back into the plane. Particles effectively oscillate in a harmonic well, damped by collisions. The stable direction is perpendicular to the ring. The unstable direction is along any radius, where forces stretch and pull particles apart. Two particles that start very near each other in phase space will experience radial forces causing them to diverge, radially. These forces have a positive Lyapunov exponent; the trajectories lie on a hyperbolic manifold, and the movement of particles is essentially chaotic, wandering through the rings. The center manifold is tangential to the rings, with particles experiencing neither compression nor stretching. This allows second-order gravitational forces to dominate, and so particles can be entrained by moons or moonlets in the rings, phase locking to them. The gravitational forces of the moons effectively provide a regularly repeating small kick, each time around the orbit, akin to a kicked rotor, such as found in a phase-locked loop. The discrete-time motion of particles in the ring can be approximated by the Poincaré map. The map effectively provides the transfer matrix of the system. The eigenvector associate
https://en.wikipedia.org/wiki/Cheating%20%28biology%29
Cheating is a term used in behavioral ecology and ethology to describe behavior whereby organisms receive a benefit at the cost of other organisms. Cheating is common in many mutualistic and altruistic relationships. A cheater is an individual who does not cooperate (or cooperates less than their fair share) but can potentially gain the benefit from others cooperating. Cheaters are also those who selfishly use common resources to maximize their individual fitness at the expense of a group. Natural selection favors cheating, but there are mechanisms to regulate it. The stress gradient hypothesis states that facilitation, cooperation or mutualism should be more common in stressful environments, while cheating, competition or parasitism are common in benign environments (i.e., nutrient excess). Theoretical models Organisms communicate and cooperate to perform a wide range of behaviors. Mutualism, or mutually beneficial interactions between species, is common in ecological systems. These interactions can be thought of as "biological markets" in which species offer partners goods that are relatively inexpensive for them to produce and receive goods that are more expensive or even impossible for them to produce. However, these systems provide opportunities for exploitation by individuals that can obtain resources while providing nothing in return. Exploiters can take on several forms: individuals outside a mutualistic relationship who obtain a commodity in a way that confers no benefit to either mutualist, individuals who receive benefits from a partner but have lost the ability to give any in return, or individuals who have the option of behaving mutualistically towards their partners but choose not to do so. Cheaters, who do not cooperate but benefit from others who do, gain a competitive edge. In an evolutionary context, this competitive edge refers to a greater ability to survive or to reproduce. If individuals who cheat are able to gain survivorship and reprod
https://en.wikipedia.org/wiki/Weitzenb%C3%B6ck%20identity
In mathematics, in particular in differential geometry, mathematical physics, and representation theory, a Weitzenböck identity, named after Roland Weitzenböck, expresses a relationship between two second-order elliptic operators on a manifold with the same principal symbol. Usually Weitzenböck formulae are implemented for G-invariant self-adjoint operators between vector bundles associated to some principal G-bundle, although the precise conditions under which such a formula exists are difficult to formulate. This article focuses on three examples of Weitzenböck identities: from Riemannian geometry, spin geometry, and complex analysis. Riemannian geometry In Riemannian geometry there are two notions of the Laplacian on differential forms over an oriented compact Riemannian manifold M. The first definition uses the divergence operator δ defined as the formal adjoint of the de Rham operator d: $\int_M \langle d\alpha, \beta \rangle = \int_M \langle \alpha, \delta\beta \rangle,$ where α is any p-form and β is any (p + 1)-form, and $\langle \cdot, \cdot \rangle$ is the metric induced on the bundle of (p + 1)-forms. The usual form Laplacian is then given by $\Delta = d\delta + \delta d.$ On the other hand, the Levi-Civita connection supplies a differential operator $\nabla : \Gamma(\Omega^p M) \to \Gamma(T^*M \otimes \Omega^p M),$ where $\Omega^p M$ is the bundle of p-forms. The Bochner Laplacian is given by $\Delta' = \nabla^* \nabla,$ where $\nabla^*$ is the adjoint of $\nabla$. This is also known as the connection or rough Laplacian. The Weitzenböck formula then asserts that $\Delta = \Delta' + A,$ where A is a linear operator of order zero involving only the curvature. The precise form of A is given, up to an overall sign depending on curvature conventions, in terms of the Riemann curvature tensor R, the Ricci tensor Ric, the map θ that takes the wedge product of a 1-form and a p-form and gives a (p + 1)-form, and the universal derivation inverse to θ on 1-forms. Spin geometry If M is an oriented spin manifold with Dirac operator ð, then one may form the spin Laplacian Δ = ð2 on the spin bundle. On the other hand, the Levi-Civita connection extends to the spin bundle to yield a differential operator $\nabla : \Gamma(S) \to \Gamma(T^*M \otimes S).$ As in the case of Riemannian manifolds, let $\Delta' = \nabla^* \nabla$.
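For p = 1 the curvature operator A reduces to the Ricci tensor, giving the classical Bochner formula; this standard special case is stated here for concreteness.

```latex
% Special case p = 1 (the classical Bochner formula): on 1-forms the
% zero-order curvature term is the Ricci tensor, so for any 1-form \alpha
\Delta \alpha \;=\; \nabla^{*}\nabla \alpha \;+\; \operatorname{Ric}(\alpha).
```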
https://en.wikipedia.org/wiki/Windows%20NT%20processor%20scheduling
Windows NT processor scheduling refers to the process by which Windows NT determines which job (task) should be run on the computer processor at which time. Without scheduling, the processor would give attention to jobs based on when they arrived in the queue, which is usually not optimal. As part of the scheduling, the processor gives a priority level to different processes running on the machine. When two processes are requesting service at the same time, the processor performs the jobs for the one with the higher priority. There are six named priority levels: Realtime, High, Above Normal, Normal, Below Normal, Low. These levels have associated numbers with them. Applications start at a base priority level of eight. The system dynamically adjusts the priority level to give all applications access to the processor. Priority levels 0–15 are used by dynamic applications. Priority levels 16–31 are reserved for real-time applications. Affinity In a multiprocessing environment with more than one logical processor (i.e. multiple cores or hyperthreading), more than one task can be running at the same time. However, a process or a thread can be set to run on only a subset of the available logical processors. The Windows Task Manager utility offers a user interface for this at the process level. References Windows NT kernel Processor scheduling algorithms
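A minimal Windows-only sketch of adjusting these settings programmatically through the documented Win32 API, here driven from Python via ctypes; error handling is omitted for brevity, and the constant used is the published value.

```python
# Sketch (Windows only): nudging a process's priority class and CPU
# affinity through the Win32 API via ctypes.
import ctypes

ABOVE_NORMAL_PRIORITY_CLASS = 0x00008000   # maps into the dynamic 0-15 range
kernel32 = ctypes.windll.kernel32

handle = kernel32.GetCurrentProcess()
kernel32.SetPriorityClass(handle, ABOVE_NORMAL_PRIORITY_CLASS)

# Restrict the process to logical processors 0 and 1 (affinity mask 0b11).
kernel32.SetProcessAffinityMask(handle, 0b11)
```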
https://en.wikipedia.org/wiki/WMBQ-CD
WMBQ-CD (channel 46) is a class A television station in New York City, affiliated with First Nations Experience. Owned by The WNET Group, it is sister to the city's two PBS member stations, Newark, New Jersey–licensed WNET (channel 13) and Garden City, New York–licensed WLIW (channel 21), as well as WNDT-CD (channel 14). Under a channel sharing arrangement, WMBQ-CD shares transmitter facilities with WLIW at One World Trade Center. Despite WMBQ-CD legally holding a low-power class A license, it transmits using WLIW's full-power spectrum. This ensures complete reception across the New York City television market. History As W22BM and WLBX-LP A construction permit for UHF channel 22 in Cranford, New Jersey, was granted to Craig Fox with the alphanumeric call sign W22BM on February 11, 1993; it signed on in March 1997. The call sign was changed to WLBX-LP on April 24, 1998. WLBX-LP was previously an affiliate of The Box until that network's acquisition by Viacom in 2001; the station then carried MTV2 like many other former Box stations. On September 11, 2001, WLBX-LP aired footage from CNN and TechTV. As WMBQ The call sign was changed to WMBQ-CA in 2004. In 2006, Renard Communications Corp. (the Craig Fox-controlled company that by then held the license) began transitioning to a new studio and transmitter they were constructing in Manhattan. Due to this change, WMBQ-CA was displaced from channel 22 to channel 46, and the city of license was changed from Cranford to New York City. On August 17, 2007, Renard Communications Corp. announced that it would sell its three stations to Equity Media Holdings for $8 million. However, the transaction had a closing deadline set for June 1, 2008, and either party could cancel the sale if it were not completed by then. The sale had not been consummated by June 19 of that year, as the company was making budget cuts elsewhere; and later that year, Equity Media Holdings entered Chapter 11 bankruptcy. On January 3, 2008, WMBQ-CA went dark,
https://en.wikipedia.org/wiki/Mycangium
The term mycangium (pl., mycangia) is used in biology for special structures on the body of an animal that are adapted for the transport of symbiotic fungi (usually in spore form). This is seen in many xylophagous insects (e.g. horntails and bark beetles), which apparently derive much of their nutrition from the digestion of various fungi that are growing amidst the wood fibers. In some cases, as in ambrosia beetles (Coleoptera: Curculionidae: Scolytinae and Platypodinae), the fungi are the sole food, and the excavations in the wood are simply to make a suitable microenvironment for the fungus to grow. In other cases (e.g., the southern pine beetle, Dendroctonus frontalis), wood tissue is the main food, and fungi weaken the defense response from the host plant. Some species of phoretic mites that ride on the beetles have their own type of mycangium, but for historical reasons, mite taxonomists use the term acarinarium. Apart from riding on the beetles, the mites live together with them in their burrows in the wood. Origin These structures were first systematically described by Helene Francke-Grosmann in 1956. Lekh R. Batra then coined the word mycangia: modern Latin, from Greek myco 'fungus' + angeion 'vessel'. Function The most common function of mycangia is preserving and releasing symbiotic inoculum. Usually, the symbiotic inoculum in mycangia will benefit its vector (typically an insect or mite), helping it to adapt to a new environment or providing nutrients to the vector itself and its descendants. For example, the ambrosia beetle (Euwallacea fornicatus) carries the symbiotic fungus Fusarium. When the beetle bores into a host plant, it releases the symbiotic fungus from its mycangium. The symbiotic fungus becomes a plant pathogen, acting to weaken the resistance of the host plant. In the meantime, the fungus grows quickly in the galleries as the main food of the beetle. After reproduction, maturing beetles will fill their mycangia with symbiont before huntin
https://en.wikipedia.org/wiki/Axiom%20A
In mathematics, Smale's axiom A defines a class of dynamical systems which have been extensively studied and whose dynamics is relatively well understood. A prominent example is the Smale horseshoe map. The term "axiom A" originates with Stephen Smale. The importance of such systems is demonstrated by the chaotic hypothesis, which states that, 'for all practical purposes', a many-body thermostatted system is approximated by an Anosov system. Definition Let M be a smooth manifold with a diffeomorphism f: M→M. Then f is an axiom A diffeomorphism if the following two conditions hold: The nonwandering set of f, Ω(f), is a compact hyperbolic set. The set of periodic points of f is dense in Ω(f). For surfaces, hyperbolicity of the nonwandering set implies the density of periodic points, but this is no longer true in higher dimensions. Nonetheless, axiom A diffeomorphisms are sometimes called hyperbolic diffeomorphisms, because the portion of M where the interesting dynamics occurs, namely, Ω(f), exhibits hyperbolic behavior. Axiom A diffeomorphisms generalize Morse–Smale systems, which satisfy further restrictions (finitely many periodic points and transversality of stable and unstable submanifolds). The Smale horseshoe map is an axiom A diffeomorphism with infinitely many periodic points and positive topological entropy. Properties Any Anosov diffeomorphism satisfies axiom A. In this case, the whole manifold M is hyperbolic (although it is an open question whether the non-wandering set Ω(f) constitutes the whole M). Rufus Bowen showed that the non-wandering set Ω(f) of any axiom A diffeomorphism supports a Markov partition. Thus the restriction of f to a certain generic subset of Ω(f) is conjugated to a shift of finite type. The density of the periodic points in the non-wandering set implies its local maximality: there exists an open neighborhood U of Ω(f) such that $\bigcap_{n \in \mathbb{Z}} f^n(U) = \Omega(f)$. Omega stability An important property of Axiom A systems is their structural stability aga
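For concreteness, hyperbolicity of Ω(f) means the standard uniform splitting condition (a standard formulation, not quoted from the article).

```latex
% Hyperbolicity of \Omega(f): a Df-invariant splitting with uniform
% contraction and expansion, for some C > 0 and 0 < \lambda < 1:
T_x M = E^{s}_{x} \oplus E^{u}_{x}, \qquad
\|Df^{n} v\| \le C \lambda^{n} \|v\| \;\; (v \in E^{s}_{x}), \qquad
\|Df^{-n} v\| \le C \lambda^{n} \|v\| \;\; (v \in E^{u}_{x}), \quad n \ge 0.
```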
https://en.wikipedia.org/wiki/%CE%92-Pinene
β-Pinene is a monoterpene, an organic compound found in plants. It is one of the two isomers of pinene, the other being α-pinene. It is a colorless liquid that is soluble in alcohol, but not in water. It has a woody-green, pine-like smell. It is one of the most abundant compounds released by forest trees. If oxidized in air, the allylic products of the pinocarveol and myrtenol family prevail. Sources Many plants from many botanical families contain the compound, including: Cuminum cyminum Humulus lupulus Pinus pinaster Clausena anisata Cannabis sativa Uses References Flavors Perfume ingredients Vinylidene compounds Monoterpenes Bicyclic compounds Cyclobutanes
https://en.wikipedia.org/wiki/Inhabited%20set
In mathematics, a set A is inhabited if there exists an element $x \in A$. In classical mathematics, the property of being inhabited is equivalent to being non-empty. However, this equivalence is not valid in constructive or intuitionistic logic, and so this separate terminology is mostly used in the set theory of constructive mathematics. Definition In the formal language of first-order logic, a set $A$ has the property of being inhabited if $\exists x\, (x \in A)$. Related definitions A set $A$ has the property of being empty if $\neg \exists x\, (x \in A)$, or equivalently $\forall x\, (x \notin A)$. Here $x \notin A$ stands for the negation $\neg (x \in A)$. A set is non-empty if it is not empty, that is, if $\neg\neg \exists x\, (x \in A)$, or equivalently $\neg \forall x\, (x \notin A)$. Theorems Modus ponens implies $((P \to Q) \land P) \to Q$, and taking a false proposition for $Q$ establishes that $P \to \neg\neg P$ is always valid. Hence, any inhabited set is provably also non-empty. Discussion In constructive mathematics, the double-negation elimination principle is not automatically valid. In particular, an existence statement is generally stronger than its double-negated form. The latter merely expresses that the existence cannot be ruled out, in the strong sense that it cannot consistently be negated. In a constructive reading, in order for $\exists x\, \varphi(x)$ to hold for some formula $\varphi$, it is necessary for a specific value of $x$ satisfying $\varphi$ to be constructed or known. Likewise, the negation of a universally quantified statement is in general weaker than an existential quantification of a negated statement. In turn, a set may be proven to be non-empty without one being able to prove it is inhabited. Examples Simple sets such as the singleton $\{0\}$ are inhabited, as e.g. witnessed by $0$. The empty set is empty and thus not inhabited. Naturally, the example section thus focuses on non-empty sets that are not provably inhabited. It is easy to give examples for any simple set theoretical property, because logical statements can always be expressed as set theoretical ones, using an axiom of separation. For example, with a subset defined as $B = \{x \in \{0\} \mid P\}$ for a proposition $P$, the proposition $P$ may always equivalently be stated as $0 \in B$. The double-negated existence claim of an entity wit
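The theorem of the preceding section can be displayed as a one-line sequent (a standard rendering of the argument sketched above).

```latex
% Intuitionistic derivation that an inhabited set is non-empty: from a
% witness x \in A, any assumption \forall y (y \notin A) yields the
% contradiction x \notin A together with x \in A, hence its negation:
\exists x\, (x \in A) \;\vdash\; \neg\, \forall y\, (y \notin A).
```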
https://en.wikipedia.org/wiki/Distributed%20design%20patterns
In software engineering, a distributed design pattern is a design pattern focused on distributed computing problems. Classification Distributed design patterns can be divided into several groups: Distributed communication patterns Security and reliability patterns Event driven patterns Examples MapReduce Bulk synchronous parallel Remote Session See also Software engineering List of software engineering topics References Software design patterns Distributed computing architecture
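As an illustration of the first example pattern, here is a minimal single-process MapReduce sketch in Python; it only mimics the map/shuffle/reduce phases and makes no claim about any particular distributed framework.

```python
# Minimal single-process sketch of the MapReduce pattern: map each input
# record to (key, value) pairs, shuffle by key, then reduce each group.
from collections import defaultdict

def map_phase(record):
    for word in record.split():
        yield word, 1

def reduce_phase(key, values):
    return key, sum(values)

def mapreduce(records):
    groups = defaultdict(list)
    for record in records:              # map + shuffle
        for key, value in map_phase(record):
            groups[key].append(value)
    return dict(reduce_phase(k, vs) for k, vs in groups.items())

print(mapreduce(["to be or not to be"]))  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```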
https://en.wikipedia.org/wiki/Central%20Bureau
The Central Bureau was one of two Allied signals intelligence (SIGINT) organisations in the South West Pacific area (SWPA) during World War II. Central Bureau was attached to the headquarters of the Supreme Commander, Southwest Pacific Area, General Douglas MacArthur. The role of the Bureau was to research and decrypt intercepted Imperial Japanese Army (land and air) traffic and work in close co-operation with other SIGINT centers in the United States, United Kingdom and India. Air activities included both army and navy air forces, as there was no independent Japanese air force. The other unit was the joint Royal Australian Navy/United States Navy Fleet Radio Unit, Melbourne (FRUMEL), which reported directly to CINCPAC (Admiral Chester Nimitz) in Hawaii and the Chief of Naval Operations (Admiral Ernest King) in Washington, D.C. Central Bureau is the precursor to the Defense Signals Bureau, which after a number of name changes is (from 2013) called the Australian Signals Directorate. Structure Central Bureau comprised: administrative personnel supply personnel cryptographic personnel cryptanalytic personnel interpreters translators a field section which included the intercept and communications personnel History Origins Beginning in January 1942, U.S. Navy stations in Hawaii (Station HYPO), Cavite/Corregidor (Station CAST) and OP-20-G (Station NEGAT, at Washington) began issuing formal intelligence decrypts far in advance of the U.S. Army or Central Bureau. General MacArthur had his own signals intelligence unit, Station 6, while he was in command in the Philippines before the start of the war, and was not fully dependent on the U.S. Navy for that type of information. However, most of the signals intelligence he received was from the U.S. Navy's Station CAST, originally at Cavite in the Manila Navy Yard, and evacuated to Corregidor Island after Japanese successes. Prior to the war, it had to be sent by courier, which caused some delay and annoyance. Ge
https://en.wikipedia.org/wiki/Texas%20Instruments%20DaVinci
The Texas Instruments DaVinci is a family of system on a chip processors that are primarily used in embedded video and vision applications. Many processors in the family combine a DSP core based on the TMS320 C6000 VLIW DSP family and an ARM CPU core into a single system on chip. By using both a general-purpose processor and a DSP, the control and media portions can both be executed by separate processors. Later chips in the family included DSP-only and ARM-only processors. All the later chips integrate several accelerators to offload commodity application specific processing from the processor cores to dedicated accelerators. Most notable among these are HDVICP, an H.264, SVC and MPEG-4 compression and decompression engine; ISP, an accelerator engine with methods for enhancing video primarily input from camera sensors; and an OSD engine for display acceleration. Some of the newest processors also integrate a vision coprocessor in the SoC. History DaVinci processors were introduced at a time when embedded processors with homogeneous processor cores were widely used. These processors were based either on cores that could do signal processing optimally, like DSPs or GPUs, or on cores that could do general-purpose processing optimally, like PowerPC, ARM, and StrongARM. By using both a general-purpose processor and a DSP on a single chip, the control and media portions can both be executed by processors that excel at their respective tasks. By providing a bundled offering with system and application software, evaluation modules and debug tools based on Code Composer Studio, TI DaVinci processors were intended to win over a broader set of customers looking to add video features to their electronic products. TI announced its first DaVinci branded video processors, the DM6443 and DM6446, on 5 December 2005. A year later, TI followed up with DSP-only versions of the chips in the family, called DM643x (DM6431, DM6433, DM6435, DM6437). On January 15, 2007, TI announc
https://en.wikipedia.org/wiki/SGPIO
Serial general purpose input/output (SGPIO) is a four-signal (or four-wire) bus used between a host bus adapter (HBA) and a backplane. Of the four signals, three are driven by the HBA and one by the backplane. Typically, the HBA is a storage controller located inside a server, desktop, rack or workstation computer that interfaces with hard disk drives or solid state drives to store and retrieve data. It is considered an extension of the general-purpose input/output (GPIO) concept. The SGPIO specification is maintained by the Small Form Factor Committee in the SFF-8485 standard. The International Blinking Pattern Interpretation indicates how SGPIO signals are interpreted into blinking light-emitting diodes (LEDs) on disk arrays and storage back-planes. History SGPIO was developed as an engineering collaboration between American Megatrends Inc, at the time makers of back-planes, and LSI-Logic in 2004. SGPIO was later published by the SFF committee as specification SFF-8485. Host bus adapters The SGPIO signal consists of 4 electrical signals; it typically originates from a host bus adapter (HBA). iPass connectors (Usually SFF-8087 or SFF-8484) carry both SAS/SATA electrical connections between the HBA and the hard drives as well as the 4 SGPIO signals. Backplanes with SGPIO bus interface A backplane is a circuit board with connectors and power circuitry into which hard drives are attached; they can have multiple slots, each of which can be populated with a hard drive. Typically the back-plane is equipped with LEDs which by their color and activity, indicate the slot's status; typically, a slot's LED will emit a particular color or blink pattern to indicate its current status. SGPIO interpretation and LED blinking patterns Although many hardware vendors define their own proprietary LED blinking pattern, the common standard for SGPIO interpretation and LED blinking pattern can be found in the IBPI specification. On back-planes, vendors use typically 2 or 3 LED
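To make the framing concrete, the sketch below packs per-slot LED states into an SGPIO-style serial bit stream in Python. It assumes the conventional three bits per drive slot (activity, locate, fail) on the data-out line; the electrical signalling itself is left to the HBA hardware, so this only models the bit packing.

```python
# Sketch: packing per-slot LED states into an SGPIO-style serial bit
# stream. SFF-8485 conventionally carries three bits per drive slot
# (activity, locate, fail) on the data-out line; this models only the
# framing, not the clocking or electrical behavior.
def sgpio_bitstream(slots):
    """slots: list of (activity, locate, fail) booleans, one per drive."""
    bits = []
    for activity, locate, fail in slots:
        bits.extend([int(activity), int(locate), int(fail)])
    return bits

# Four-slot backplane: slot 0 active, slot 2 flagged as failed.
print(sgpio_bitstream([(True, False, False),
                       (False, False, False),
                       (False, False, True),
                       (False, False, False)]))
```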
https://en.wikipedia.org/wiki/Scale%20space%20implementation
In the areas of computer vision, image analysis and signal processing, the notion of scale-space representation is used for processing measurement data at multiple scales, and specifically to enhance or suppress image features over different ranges of scale (see the article on scale space). A special type of scale-space representation is provided by the Gaussian scale space, where the image data in N dimensions is subjected to smoothing by Gaussian convolution. Most of the theory for Gaussian scale space deals with continuous images, whereas, when implementing this theory, one will have to face the fact that most measurement data are discrete. Hence, the theoretical problem arises concerning how to discretize the continuous theory while either preserving or well approximating the desirable theoretical properties that lead to the choice of the Gaussian kernel (see the article on scale-space axioms). This article describes basic approaches for this that have been developed in the literature. Statement of the problem The Gaussian scale-space representation of an N-dimensional continuous signal $f_C(x)$ is obtained by convolving $f_C$ with an N-dimensional Gaussian kernel $g_N(x; t)$: $L(x; t) = (f_C * g_N(\cdot; t))(x).$ In other words: $L(x; t) = \int_{\xi \in \mathbb{R}^N} f_C(x - \xi)\, g_N(\xi; t)\, d\xi.$ However, for implementation, this definition is impractical, since it is continuous. When applying the scale space concept to a discrete signal $f_D$, different approaches can be taken. This article is a brief summary of some of the most frequently used methods. Separability Using the separability property of the Gaussian kernel $g_N(x; t) = \prod_{i=1}^{N} G(x_i; t),$ the N-dimensional convolution operation can be decomposed into a set of separable smoothing steps with a one-dimensional Gaussian kernel G along each dimension, where $G(x; t) = \frac{1}{\sqrt{2\pi t}} e^{-x^2/(2t)}$ and the standard deviation of the Gaussian σ is related to the scale parameter t according to t = σ². Separability will be assumed in all that follows, even when the kernel is not exactly Gaussian, since separation of the dimensions is the most practical way to implement multidimensional smoothing, especia
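A minimal Python/NumPy sketch of the separable approach with a sampled, truncated Gaussian kernel; the truncation radius and the renormalization after truncation are implementation choices, not prescribed by the theory.

```python
# Sketch of separable Gaussian smoothing with a sampled (truncated)
# Gaussian kernel; one 1-D convolution per axis, with t = sigma**2.
import numpy as np

def sampled_gaussian_kernel(t, radius=None):
    if radius is None:
        radius = int(4 * np.sqrt(t)) + 1          # truncation at ~4 sigma
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * t))
    return g / g.sum()                            # renormalize after truncation

def gaussian_scale_space(image, t):
    g = sampled_gaussian_kernel(t)
    out = image.astype(float)
    for axis in range(out.ndim):                  # separability: one pass per axis
        out = np.apply_along_axis(
            lambda row: np.convolve(row, g, mode="same"), axis, out)
    return out

img = np.zeros((9, 9)); img[4, 4] = 1.0          # impulse input
print(gaussian_scale_space(img, t=1.0)[4, 4])    # central response
```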
https://en.wikipedia.org/wiki/Polymelia
Polymelia is a birth defect in which an affected individual has more than the usual number of limbs. It is a type of dysmelia. In humans and most land-dwelling vertebrates, this means having five or more limbs. The extra limb is most commonly shrunken and/or deformed. The term is from Greek πολυ- "many", μέλεα "limbs". Sometimes an embryo started as conjoined twins, but one twin degenerated completely except for one or more limbs, which end up attached to the other twin. Sometimes small extra legs between the normal legs are caused by the body axis forking in the dipygus condition. Notomelia (from Greek for "back-limb-condition") is polymelia where the extra limb is rooted along or near the midline of the back. Notomelia has been reported in Angus cattle often enough to be of concern to farmers. Cephalomelia (from Greek for "head-limb-condition") is polymelia where the extra limb is rooted on the head. Origin Tetrapod legs evolved in the Devonian or Carboniferous geological period from the pectoral fins and pelvic fins of their crossopterygian fish ancestors. Fish fins develop along a "fin line", which runs from the back of the head along the midline of the back, round the end of the tail, and forwards along the underside of the tail, and at the cloaca splits into left and right fin lines which run forwards to the gills. In the paired ventral part of the fin line, normally only the pectoral and pelvic fins survive (but the Devonian acanthodian fish Mesacanthus developed a third pair of paired fins); but along the non-paired parts of the fin line, other fins develop. In tetrapods, only the four paired fins normally persisted, and became the four legs. Notomelia and cephalomelia are atavistic reappearances of dorsal fins. Some other cases of polymelia are extra development along the paired part of the fin lines, or along the ventral posterior non-paired part of the fin line. Notable cases Humans 1995: Somali baby girl born with three left arms. In March 2006
https://en.wikipedia.org/wiki/International%20Association%20for%20Hydro-Environment%20Engineering%20and%20Research
The International Association for Hydro-Environment Engineering and Research (IAHR), founded in 1935, is a worldwide, non-profit, independent organisation of engineers and water specialists working in fields related to the hydro-environment, in particular with reference to hydraulics and its practical application. IAHR was called the International Association of Hydraulic Engineering and Research until 2009. Activities range from river and maritime hydraulics to water resources development, flood risk management and eco-hydraulics, through to ice engineering, hydroinformatics and continuing education and training. IAHR stimulates and promotes both research and its application, and by so doing strives to contribute to sustainable development, the optimisation of world water resources management and industrial flow processes. IAHR accomplishes its goals by a wide variety of member activities including: working groups, research agenda, congresses, specialty conferences, workshops and short courses; Journals, Monographs and Proceedings; by collaborating with international organisations such as UN Water, UNESCO, WMO, IDNDR, GWP, ICSU; and by co-operation with other water-related national and international organisations. IAHR publishes several international scientific journals in collaboration with Taylor & Francis and Elsevier – the Journal of Hydraulic Research, the Journal of River Basin Management, the Journal of Water Engineering and Research, the Revista Iberoamericana del Agua RIBAGUA jointly with the World Council of Civil Engineers (WCCE), the Journal of Ecohydraulics and the Journal of Hydro-Environment Engineering and Research with the Korean Water Resources Association. It also publishes Hydrolink, a quarterly magazine that is now free access. The activities of IAHR are carried out by two full-time professional secretariats with offices in Madrid, Spain, which is hosted by the consortium Spain Water (CEDEX, Direccion General del Agua, Direccion General de Costas,
https://en.wikipedia.org/wiki/Rapid%20modes%20of%20evolution
Rapid modes of evolution have been proposed by several notable biologists after Charles Darwin proposed his theory of evolutionary descent by natural selection. In his book On the Origin of Species (1859), Darwin stressed the gradual nature of descent, writing: It may be said that natural selection is daily and hourly scrutinizing, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, whenever and wherever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life. We see nothing of these slow changes in progress, until the hand of time has marked the long lapses of ages, and then so imperfect is our view into long past geological ages, that we only see that the forms of life are now different from what they formerly were. (1859) Evolutionary developmental biology Work in developmental biology has identified dynamical and physical mechanisms of tissue morphogenesis that may underlie such abrupt morphological transitions. Consequently, consideration of mechanisms of phylogenetic change that are actually (not just apparently) non-gradual is increasingly common in the field of evolutionary developmental biology, particularly in studies of the origin of morphological novelty. A description of such mechanisms can be found in the multi-authored volume Origination of Organismal Form. See also Evolution Evolutionary developmental biology Otto Schindewolf Punctuated equilibrium Quantum evolution Richard Goldschmidt Saltationism Bibliography Darwin, C. (1859) On the Origin of Species London: Murray. Goldschmidt, R. (1940) The Material Basis of Evolution. New Haven, Conn.: Yale University Press. Gould, S. J. (1977) "The Return of Hopeful Monsters" Natural History 86 (June/July): 22-30. Gould, S. J. (2002) The Structure of Evolutionary Theory. Cambridge MA: Harvard Univ. Press. Müller, G. B. and Newman,
https://en.wikipedia.org/wiki/Immunochemistry
Immunochemistry is the study of the chemistry of the immune system. This involves the study of the properties, functions, interactions and production of the chemical components (antibodies/immunoglobulins, toxins, epitopes of proteins like CD4, antitoxins, cytokines/chemokines, antigens) of the immune system. It also includes immune responses and the determination of immune materials/products by immunochemical assays. In addition, immunochemistry is the study of the identities and functions of the components of the immune system. Immunochemistry is also used to describe the application of immune system components, in particular antibodies, to chemically labelled antigen molecules for visualization. Various methods in immunochemistry have been developed and refined, and used in scientific study, from virology to molecular evolution. Immunochemical techniques include: enzyme-linked immunosorbent assay, immunoblotting (e.g., Western blot assay), precipitation and agglutination reactions, immunoelectrophoresis, immunophenotyping, immunochromatographic assay and cytometry. One of the earliest examples of immunochemistry is the Wassermann test to detect syphilis. Svante Arrhenius was also one of the pioneers in the field; he published Immunochemistry in 1907, which described the application of the methods of physical chemistry to the study of the theory of toxins and antitoxins. Immunochemistry is also studied from the aspect of using antibodies to label epitopes of interest in cells (immunocytochemistry) or tissues (immunohistochemistry). References Branches of immunology
https://en.wikipedia.org/wiki/Differentiation%20in%20Fr%C3%A9chet%20spaces
In mathematics, in particular in functional analysis and nonlinear analysis, it is possible to define the derivative of a function between two Fréchet spaces. This notion of differentiation, the Gateaux derivative between Fréchet spaces, is significantly weaker than the derivative in a Banach space and makes sense even between general topological vector spaces. Nevertheless, it is the weakest notion of differentiation for which many of the familiar theorems from calculus hold. In particular, the chain rule is true. With some additional constraints on the Fréchet spaces and functions involved, there is an analog of the inverse function theorem called the Nash–Moser inverse function theorem, having wide applications in nonlinear analysis and differential geometry. Mathematical details Formally, the definition of differentiation is identical to the Gateaux derivative. Specifically, let $X$ and $Y$ be Fréchet spaces, $U \subseteq X$ be an open set, and $F : U \to Y$ be a function. The directional derivative of $F$ at $u \in U$ in the direction $h \in X$ is defined by $DF(u)h = \lim_{t \to 0} \frac{F(u + th) - F(u)}{t}$ if the limit exists. One says that $F$ is continuously differentiable, or $C^1$, if the limit exists for all $u \in U$ and $h \in X$ and the mapping $DF : U \times X \to Y$ is a continuous map. Higher order derivatives are defined inductively via $D^{k+1}F(u)\{h_1, \ldots, h_{k+1}\} = \lim_{t \to 0} \frac{D^k F(u + t h_{k+1})\{h_1, \ldots, h_k\} - D^k F(u)\{h_1, \ldots, h_k\}}{t}.$ A function is said to be $C^k$ if $D^k F$ exists and is continuous. It is $C^\infty$, or smooth, if it is $C^k$ for every $k$. Properties Let $X$, $Y$ and $Z$ be Fréchet spaces. Suppose that $U$ is an open subset of $X$, $V$ is an open subset of $Y$, and $F : U \to V$, $G : V \to Z$ are a pair of $C^1$ functions. Then the following properties hold: Fundamental theorem of calculus. If the line segment from $u$ to $v$ lies entirely within $U$, then $F(v) - F(u) = \int_0^1 DF(u + t(v - u))\,(v - u)\, dt.$ The chain rule. For all $u \in U$ and $h \in X$, $D(G \circ F)(u)h = DG(F(u))\, DF(u)h.$ Linearity. $DF(u)h$ is linear in $h$. More generally, if $F$ is $C^k$, then $D^k F(u)\{h_1, \ldots, h_k\}$ is multilinear in the $h_i$'s. Taylor's theorem with remainder. Suppose that the line segment between $u$ and $u + h$ lies entirely within $U$. If $F$ is $C^k$, then $F(u + h) = F(u) + DF(u)h + \cdots + \frac{1}{(k-1)!} D^{k-1}F(u)\{h, \ldots, h\} + R_k,$ where the remainder term is given by $R_k = \int_0^1 \frac{(1 - t)^{k-1}}{(k-1)!}\, D^k F(u + th)\{h, \ldots, h\}\, dt.$ Commutativity of directional derivatives. If $F$ is $C^k$, then $D^k F(u)\{h_{\sigma(1)}, \ldots, h_{\sigma(k)}\} = D^k F(u)\{h_1, \ldots, h_k\}$ for every permutation σ of $\{1, \ldots, k\}$. The proofs of many of these properties rely fundamentally on the fact that it is possible to de
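A worked example may help: on the Fréchet space C^∞(S¹), the squaring operator is smooth in this sense (a standard textbook-style computation, not taken from the article).

```latex
% A worked example on the Fréchet space X = C^\infty(S^1): for the
% squaring operator F(f) = f^2, the defining limit exists and gives
DF(f)h \;=\; \lim_{t \to 0} \frac{(f + th)^2 - f^2}{t}
        \;=\; \lim_{t \to 0} \left( 2fh + t h^2 \right) \;=\; 2fh,
% which is linear in h and jointly continuous, so F is C^1 (indeed smooth).
```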
https://en.wikipedia.org/wiki/Chain%20loading
Chain loading is a method used by computer programs to replace the currently executing program with a new program, using a common data area to pass information from the current program to the new program. It occurs in several areas of computing. Chain loading is similar to the use of overlays. Unlike overlays, however, chain loading replaces the currently executing program in its entirety. Overlays usually replace only a portion of the running program. Like the use of overlays, the use of chain loading increases the I/O load of an application. Chain loading in boot manager programs In operating system boot manager programs, chain loading is used to pass control from the boot manager to a boot sector. The target boot sector is loaded in from disk, replacing the in-memory boot sector from which the boot manager itself was bootstrapped, and executed. Chain loading in Unix In Unix (and in Unix-like operating systems), the exec() system call is used to perform chain loading. The program image of the current process is replaced with an entirely new image, and the current thread begins execution of that image. The common data area comprises the process' environment variables, which are preserved across the system call. Chain loading in Linux In addition to process-level chain loading, Linux supports the kexec system call to replace the entire operating system kernel with a different version. The new kernel boots as if it were started from power up, and no running processes are preserved. Chain loading in BASIC programs In BASIC programs, chain loading is the purview of the CHAIN statement (or, in Commodore BASIC, the LOAD statement), which causes the current program to be terminated and the chained-to program to be loaded and invoked (with, on those dialects of BASIC that support it, an optional parameter specifying the line number from which execution is to commence, rather than the default of the first line of the new program). The common data area varies
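A minimal sketch of Unix chain loading driven from Python, using the exec family of calls; the environment-variable name is illustrative only.

```python
# Sketch of Unix chain loading from Python: the exec* family replaces the
# current process image; the environment acts as the common data area.
import os

os.environ["HANDOFF_NOTE"] = "passed from the old image"  # survives exec
# Replace this process with `ls -l`; nothing after this line runs on success.
os.execvp("ls", ["ls", "-l"])
```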
https://en.wikipedia.org/wiki/Core%20common%20area
The core common area is the common data area through which one program passes information to the next during chain loading. Named after magnetic core memory, the term has persisted into the modern era and is commonly used by both the Fortran and BASIC languages. See also Chain loading Operating system technology
https://en.wikipedia.org/wiki/Scale-space%20axioms
In image processing and computer vision, a scale space framework can be used to represent an image as a family of gradually smoothed images. This framework is very general and a variety of scale space representations exist. A typical approach for choosing a particular type of scale space representation is to establish a set of scale-space axioms, describing basic properties of the desired scale-space representation and often chosen so as to make the representation useful in practical applications. Once established, the axioms narrow the possible scale-space representations to a smaller class, typically with only a few free parameters. A set of standard scale space axioms, discussed below, leads to the linear Gaussian scale-space, which is the most common type of scale space used in image processing and computer vision. Scale space axioms for the linear scale-space representation The linear scale space representation $L(x; t) = (g(\cdot; t) * f)(x)$ of a signal $f(x)$, obtained by smoothing with the Gaussian kernel $g(x; t)$, satisfies a number of properties, the 'scale-space axioms', that make it a special form of multi-scale representation: linearity $L(a f_1 + b f_2;\, t) = a\, L(f_1; t) + b\, L(f_2; t),$ where $f_1$ and $f_2$ are signals while $a$ and $b$ are constants; shift invariance $L(S_{\Delta x} f;\, t) = S_{\Delta x}\, L(f; t),$ where $S_{\Delta x}$ denotes the shift (translation) operator; semi-group structure $g(\cdot; t_1) * g(\cdot; t_2) = g(\cdot; t_1 + t_2),$ with the associated cascade smoothing property $L(\cdot; t_2) = g(\cdot; t_2 - t_1) * L(\cdot; t_1);$ existence of an infinitesimal generator; non-creation of local extrema (zero-crossings) in one dimension; non-enhancement of local extrema in any number of dimensions, $\partial_t L \le 0$ at spatial maxima and $\partial_t L \ge 0$ at spatial minima; rotational symmetry $g(x; t) = h(|x|; t)$ for some function $h$; scale invariance $\hat{g}(\omega; t) = \hat{h}\!\left(\omega / \varphi(t)\right)$ for some functions $\varphi$ and $\hat{h}$, where $\hat{g}$ denotes the Fourier transform of $g$; positivity $g(x; t) \ge 0$; normalization $\int g(x; t)\, dx = 1$. In fact, it can be shown that the Gaussian kernel is a unique choice given several different combinations of subsets of these scale-space axioms: most of the axioms (linearity, shift-invariance, semigroup) correspond to scaling being a semigroup of shift-invariant linear operator, which is satisfied by a number of families
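The kernel these axioms single out is the N-dimensional Gaussian, written here with the scale parameter t = σ².

```latex
% The unique kernel (up to the choices discussed above): the
% N-dimensional Gaussian with scale parameter t = \sigma^2,
g(x;\, t) \;=\; \frac{1}{(2\pi t)^{N/2}} \, e^{-\,|x|^2 / (2t)}.
```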
https://en.wikipedia.org/wiki/Indole%20test
The indole test is a biochemical test performed on bacterial species to determine the ability of the organism to convert tryptophan into indole. This division is performed by a chain of a number of different intracellular enzymes, a system generally referred to as "tryptophanase." Biochemistry Indole is generated by reductive deamination from tryptophan via the intermediate molecule indolepyruvic acid. Tryptophanase catalyzes the deamination reaction, during which the amine (-NH2) group of the tryptophan molecule is removed. Final products of the reaction are indole, pyruvic acid, ammonium (NH4+) and energy. Pyridoxal phosphate is required as a coenzyme. Performing a test Like many biochemical tests on bacteria, results of an indole test are indicated by a change in color following a reaction with an added reagent. Pure bacterial culture must be grown in sterile tryptophan or peptone broth for 24–48 hours before performing the test. Following incubation, five drops of Kovac's reagent (isoamyl alcohol, para-Dimethylaminobenzaldehyde, concentrated hydrochloric acid) are added to the culture broth. A positive result is shown by the presence of a red or reddish-violet color in the surface alcohol layer of the broth. A negative result appears yellow. A variable result can also occur, showing an orange color as a result. This is due to the presence of skatole, also known as methyl indole or methylated indole, another possible product of tryptophan degradation. The positive red color forms as a result of a series of reactions. The para-Dimethylaminobenzaldehyde reacts with indole present in the medium to form a red rosindole dye. The isoamyl alcohol forms a complex with rosindole dye, which causes it to precipitate. The remaining alcohol and the precipitate then rise to the surface of the medium. A variation on this test using Ehrlich's reagent (using ethyl alcohol in place of isoamyl alcohol, developed by Paul Ehrlich) is used when performing the test on nonferment
https://en.wikipedia.org/wiki/Refactorable%20number
A refactorable number or tau number is an integer n that is divisible by the count of its divisors, or to put it algebraically, n is such that $\tau(n) \mid n$, where $\tau(n)$ denotes the number of divisors of n. The first few refactorable numbers are 1, 2, 8, 9, 12, 18, 24, 36, 40, 56, 60, 72, 80, 84, 88, 96, 104, 108, 128, 132, 136, 152, 156, 180, 184, 204, 225, 228, 232, 240, 248, 252, 276, 288, 296, ... For example, 18 has 6 divisors (1 and 18, 2 and 9, 3 and 6) and is divisible by 6. There are infinitely many refactorable numbers. Properties Cooper and Kennedy proved that refactorable numbers have natural density zero. Zelinsky proved that no three consecutive integers can all be refactorable. Colton proved that no refactorable number is perfect. The equation $\gcd(n, x) = \tau(n)$ has solutions only if $n$ is a refactorable number, where $\gcd$ is the greatest common divisor function. Let $N(x)$ be the number of refactorable numbers which are at most $x$. The problem of determining an asymptotic for $N(x)$ is open; Spiro has proven a partial asymptotic result. There are still unsolved problems regarding refactorable numbers. Colton asked if there are arbitrarily large $n$ such that both $n$ and $n + 1$ are refactorable. Zelinsky posed a related open question concerning the existence of further refactorable numbers with prescribed properties. History First defined by Curtis Cooper and Robert E. Kennedy, who showed that the tau numbers have natural density zero, they were later rediscovered by Simon Colton using a computer program he had made which invents and judges definitions from a variety of areas of mathematics, such as number theory and graph theory. Colton called such numbers "refactorable". While computer programs had discovered proofs before, this discovery was one of the first times that a computer program had discovered a new or previously obscure idea. Colton proved many results about refactorable numbers, showing that there were infinitely many and proving a variety of congruence restrictions on their distribution. Colton was only later alerted that Kennedy and Cooper
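A naive Python sketch that reproduces the list above directly from the definition; this is fine for small n, while a serious implementation would count divisors from the prime factorization instead.

```python
# Sketch: generate refactorable numbers by checking tau(n) | n with a
# naive divisor count (quadratic overall, but clear).
def tau(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

refactorable = [n for n in range(1, 300) if n % tau(n) == 0]
print(refactorable)   # 1, 2, 8, 9, 12, 18, 24, 36, 40, 56, 60, 72, 80, ...
```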
https://en.wikipedia.org/wiki/Insular%20dwarfism
Insular dwarfism, a form of phyletic dwarfism, is the process and condition of large animals evolving or having a reduced body size when their population's range is limited to a small environment, primarily islands. This natural process is distinct from the intentional creation of dwarf breeds, called dwarfing. This process has occurred many times throughout evolutionary history, with examples including dinosaurs, like Europasaurus and Magyarosaurus dacus, and modern animals such as elephants and their relatives. This process, and other "island genetics" artifacts, can occur not only on islands, but also in other situations where an ecosystem is isolated from external resources and breeding. This can include caves, desert oases, isolated valleys and isolated mountains ("sky islands"). Insular dwarfism is one aspect of the more general "island effect" or "Foster's rule", which posits that when mainland animals colonize islands, small species tend to evolve larger bodies (island gigantism), and large species tend to evolve smaller bodies. This is itself one aspect of island syndrome, which describes the differences in morphology, ecology, physiology and behaviour of insular species compared to their continental counterparts. Possible causes There are several proposed explanations for the mechanism which produces such dwarfism. One is a selective process where only smaller animals trapped on the island survive, as food periodically declines to a borderline level. The smaller animals need fewer resources and smaller territories, and so are more likely to get past the break-point where population decline allows food sources to replenish enough for the survivors to flourish. Smaller size is also advantageous from a reproductive standpoint, as it entails shorter gestation periods and generation times. In the tropics, small size should make thermoregulation easier. Among herbivores, large size confers advantages in coping with both competitors and predators, so a reduc
https://en.wikipedia.org/wiki/Unitary%20divisor
In mathematics, a natural number a is a unitary divisor (or Hall divisor) of a number b if a is a divisor of b and if a and b/a are coprime, having no common factor other than 1. Equivalently, a divisor a of b is a unitary divisor if and only if every prime factor of a has the same multiplicity in a as it has in b. The concept of a unitary divisor originates from R. Vaidyanathaswamy (1931), who used the term block divisor. Example 5 is a unitary divisor of 60, because 5 and 60/5 = 12 have only 1 as a common factor. On the contrary, 6 is a divisor but not a unitary divisor of 60, as 6 and 60/6 = 10 have a common factor other than 1, namely 2. Sum of unitary divisors The sum-of-unitary-divisors function is denoted by the lowercase Greek letter sigma thus: σ*(n). The sum of the k-th powers of the unitary divisors is denoted by σ*k(n): $\sigma_k^*(n) = \sum_{\substack{d \mid n \\ \gcd(d,\, n/d) = 1}} d^k.$ If the proper unitary divisors of a given number add up to that number, then that number is called a unitary perfect number. Properties Number 1 is a unitary divisor of every natural number. The number of unitary divisors of a number n is $2^k$, where k is the number of distinct prime factors of n. This is because each integer N > 1 is the product of positive powers $p^{r_p}$ of distinct prime numbers p. Thus every unitary divisor of N is the product, over a given subset S of the prime divisors {p} of N, of the prime powers $p^{r_p}$ for p ∈ S. If there are k prime factors, then there are exactly $2^k$ subsets S, and the statement follows. The sum of the unitary divisors of n is odd if n is a power of 2 (including 1), and even otherwise. Both the count and the sum of the unitary divisors of n are multiplicative functions of n that are not completely multiplicative. The Dirichlet generating function is $\sum_{n \ge 1} \frac{\sigma_k^*(n)}{n^s} = \frac{\zeta(s)\,\zeta(s - k)}{\zeta(2s - k)}.$ Every divisor of n is unitary if and only if n is square-free. Odd unitary divisors The sum of the k-th powers of the odd unitary divisors is $\sigma_k^{*(o)}(n) = \sum_{\substack{d \mid n,\ d \text{ odd} \\ \gcd(d,\, n/d) = 1}} d^k.$ It is also multiplicative, with Dirichlet generating function $\sum_{n \ge 1} \frac{\sigma_k^{*(o)}(n)}{n^s} = \frac{\zeta(s)\,\zeta(s - k)\,(1 - 2^{k - s})}{\zeta(2s - k)\,(1 - 2^{k - 2s})}.$ Bi-unitary divisors A divisor d of n is a bi-unitary div
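A small Python sketch illustrating the definition and the 2^k counting property; the brute-force search is for clarity, not efficiency.

```python
# Sketch: unitary divisors via the coprimality test gcd(d, n // d) == 1,
# checking the 2^k count against the number of distinct prime factors.
from math import gcd

def unitary_divisors(n):
    return [d for d in range(1, n + 1)
            if n % d == 0 and gcd(d, n // d) == 1]

def distinct_prime_factors(n):
    k, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            k += 1
            while n % p == 0:
                n //= p
        p += 1
    return k + (1 if n > 1 else 0)

uds = unitary_divisors(60)
print(uds)                                            # [1, 3, 4, 5, 12, 15, 20, 60]
assert len(uds) == 2 ** distinct_prime_factors(60)    # 2^3 = 8
```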
https://en.wikipedia.org/wiki/Charles%20Ezra%20Greene
Charles Ezra Greene (February 12, 1842 – 1903) was an American civil engineer, born in Cambridge, Massachusetts. He graduated at Harvard in 1862 and at Massachusetts Institute of Technology in 1863, served as quartermaster during the last two years of the Civil War, and was United States assistant engineer from 1870 to 1872, when, for part of a year, he was city engineer of Bangor, Maine. In the same year he became connected with the engineering department of the University of Michigan. In 1895, he became the first dean of the University of Michigan College of Engineering, a position he held until his death. Greene House and Greene Lounge, located within the East Quad dormitory on the Central Campus of the University of Michigan, are named in his honor. He was an associate editor of the Engineering News from 1876 to 1877. His publications include: Graphical Method for the Analysis of Bridge Trusses (1876) Trusses and Arches: Graphics for Engineers, Architects, and Builders (three volumes, 1876–79; third edition, 1903) Notes on Rankine's Civil Engineering (1891) Structural Mechanics (1897; second edition, 1905) References 1842 births 1903 deaths American engineering writers American civil engineers Harvard College alumni Massachusetts Institute of Technology alumni Military personnel from Boston Union Army officers University of Michigan faculty Quartermasters
https://en.wikipedia.org/wiki/Alphabetic%20principle
According to the alphabetic principle, letters and combinations of letters are the symbols used to represent the speech sounds of a language based on systematic and predictable relationships between written letters, symbols, and spoken words. The alphabetic principle is the foundation of any alphabetic writing system (such as the English variety of the Latin alphabet, one of the more common types of writing systems in use today). In the education field, it is known as the alphabetic code. Alphabetic writing systems that use an (in principle) almost perfectly phonemic orthography have a single letter (or digraph or, occasionally, trigraph) for each individual phoneme and a one-to-one correspondence between sounds and the letters that represent them, although predictable allophonic alternation is normally not shown. Such systems are used, for example, in the modern languages Serbo-Croatian (arguably, an example of perfect phonemic orthography), Macedonian, Estonian, Finnish, Italian, Romanian, Spanish, Georgian, Hungarian, Turkish, and Esperanto. The best cases have a straightforward spelling system, enabling a writer to predict the spelling of a word given its pronunciation and similarly enabling a reader to predict the pronunciation of a word given its spelling. Ancient languages with such almost perfectly phonemic writing systems include Avestic, Latin, Vedic, and Sanskrit (Devanāgarī—an abugida; see Vyakarana). On the other hand, French and English show a strong mismatch between sounds and the symbols that represent them. The alphabetic principle is closely tied to phonics, as it is the systematic relationship between spoken words and their visual representation (letters). The alphabetic principle does not underlie logographic writing systems like Chinese or syllabic writing systems such as Japanese kana. Korean was formerly written partially with Chinese characters, but is now written in the fully alphabetic Hangul system, in which the letters are not written linearly, but arranged
https://en.wikipedia.org/wiki/Octadecyltrichlorosilane
Octadecyltrichlorosilane (ODTS or n-octadecyltrichlorosilane) is an organosilicon compound with the formula CH₃(CH₂)₁₇SiCl₃. A colorless liquid, it is used as a silanization agent to prepare hydrophobic stationary phases for reversed-phase chromatography. It is also evaluated for forming self-assembled monolayers on silicon dioxide substrates. Its structural chemical formula is CH₃(CH₂)₁₇SiCl₃. It is flammable and hydrolyzes readily with release of hydrogen chloride. Dodecyltrichlorosilane, an ODTS analog with a shorter alkyl chain, is used for the same purpose. ODTS-PVP films are used in organic-substrate LCD displays. References Chlorosilanes Thin films
https://en.wikipedia.org/wiki/Multi-scale%20approaches
The scale space representation of a signal obtained by Gaussian smoothing satisfies a number of special properties, scale-space axioms, which make it into a special form of multi-scale representation. There are, however, also other types of "multi-scale approaches" in the areas of computer vision, image processing and signal processing, in particular the notion of wavelets. The purpose of this article is to describe a few of these approaches: Scale-space theory for one-dimensional signals For one-dimensional signals, there exists quite a well-developed theory for continuous and discrete kernels that guarantee that new local extrema or zero-crossings cannot be created by a convolution operation. For continuous signals, it holds that all scale-space kernels can be decomposed into the following sets of primitive smoothing kernels: the Gaussian kernel $g(x, t) = \frac{1}{\sqrt{2\pi t}} e^{-x^2/(2t)}$ where $t > 0$, truncated exponential kernels (filters with one real pole in the s-plane): $h(x) = e^{-ax}$ if $x \ge 0$ and 0 otherwise, where $a > 0$, as well as $h(x) = e^{bx}$ if $x \le 0$ and 0 otherwise, where $b > 0$, translations, rescalings. For discrete signals, we can, up to trivial translations and rescalings, decompose any discrete scale-space kernel into the following primitive operations: the discrete Gaussian kernel $T(n, t) = e^{-t} I_n(t)$ where $t > 0$ and where $I_n$ are the modified Bessel functions of integer order, generalized binomial kernels corresponding to linear smoothing of the form $f_{out}(x) = p\, f_{in}(x) + q\, f_{in}(x-1)$ where $p, q > 0$, first-order recursive filters corresponding to linear smoothing of the form $f_{out}(x) = f_{in}(x) + \alpha\, f_{out}(x-1)$ where $\alpha > 0$, the one-sided Poisson kernel $p(n, \lambda) = e^{-\lambda} \frac{\lambda^n}{n!}$ for $n \ge 0$ where $\lambda > 0$, and $p(n, \lambda) = e^{-\lambda} \frac{\lambda^{-n}}{(-n)!}$ for $n \le 0$ where $\lambda > 0$. From this classification, it is apparent that if we require a continuous semi-group structure, there are only three classes of scale-space kernels with a continuous scale parameter; the Gaussian kernel which forms the scale-space of continuous signals, the discrete Gaussian kernel which forms the scale-space of discrete signals and the time-causal Poisson kernel that forms a temporal scale-space over discrete time. If we on the other hand sacrifice the continuous se
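As an illustration of the discrete theory, the discrete Gaussian kernel $T(n,t) = e^{-t} I_n(t)$ can be evaluated directly from the modified Bessel functions. A minimal sketch, assuming NumPy and SciPy are available (the exponentially scaled Bessel function ive gives $e^{-t} I_n(t)$ in one call):

```python
import numpy as np
from scipy.special import ive  # ive(n, t) = exp(-t) * iv(n, t) for t > 0

def discrete_gaussian_kernel(t, radius):
    """Lindeberg's discrete Gaussian T(n, t) = exp(-t) * I_n(t) on n = -radius..radius."""
    n = np.arange(-radius, radius + 1)
    return ive(n, t)

k = discrete_gaussian_kernel(t=2.0, radius=10)
print(k.sum())                            # ~1.0: normalized, up to truncation of the tails
signal = np.random.rand(100)
smoothed = np.convolve(signal, k, mode="same")  # one smoothing step in the discrete scale-space
```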
https://en.wikipedia.org/wiki/Strict%20differentiability
In mathematics, strict differentiability is a modification of the usual notion of differentiability of functions that is particularly suited to p-adic analysis. In short, the definition is made more restrictive by allowing both points used in the difference quotient to "move". Basic definition The simplest setting in which strict differentiability can be considered, is that of a real-valued function defined on an interval I of the real line. The function f : I → R is said to be strictly differentiable at a point a ∈ I if $\lim_{(x,y)\to(a,a)} \frac{f(x)-f(y)}{x-y}$ exists, where $(x,y)\to(a,a)$ is to be considered as a limit in $\mathbb{R}^2$, and of course requiring $x \ne y$. A strictly differentiable function is obviously differentiable, but the converse is wrong, as can be seen from the counter-example $f(x) = x^2 \sin(1/x)$ for $x \ne 0$, $f(0) = 0$, which is differentiable at 0 but not strictly differentiable there. One has however the equivalence of strict differentiability on an interval I, and being of differentiability class $C^1$ (i.e. continuously differentiable). In analogy with the Fréchet derivative, the previous definition can be generalized to the case where R is replaced by a Banach space E (such as $\mathbb{R}^n$), and requiring existence of a continuous linear map L such that $f(x) - f(y) = L(x-y) + o(\|x-y\|)$, where the $o(\cdot)$ term is defined in a natural way on E × E. Motivation from p-adic analysis In the p-adic setting, the usual definition of the derivative fails to have certain desirable properties. For instance, it is possible for a function that is not locally constant to have zero derivative everywhere. An example of this is furnished by the function F: Zp → Zp, where Zp is the ring of p-adic integers, defined by $F(x) = p^{2n}$ if $x \equiv p^n \pmod{p^{2n+1}}$ for some natural number $n \ge 1$, and $F(x) = 0$ otherwise. One checks that the derivative of F, according to usual definition of the derivative, exists and is zero everywhere, including at x = 0. That is, for any x in Zp, $\lim_{y \to x} \frac{F(y) - F(x)}{y - x} = 0.$ Nevertheless F fails to be locally constant at the origin. The problem with this function is that the difference quotients do not approach zero for x and y close to zero. For example, taking $x = p^n - p^{2n}$ and $y = p^n$, we have $\frac{F(x) - F(y)}{x - y} = \frac{0 - p^{2n}}{-p^{2n}} = 1,$ which does not approach zero. The definition of strict differentiability avoids this problem by impo
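The failure of the ordinary derivative to control difference quotients can be checked with exact integer arithmetic. A small Python sketch of the example above, with p = 3 (the search bound n_max is an implementation convenience, not part of the definition):

```python
p = 3

def F(x, n_max=50):
    """F(x) = p**(2n) if x is congruent to p**n mod p**(2n + 1) for some n >= 1, else 0."""
    for n in range(1, n_max):
        if (x - p**n) % p**(2 * n + 1) == 0:
            return p**(2 * n)
    return 0

for n in range(1, 6):
    x, y = p**n - p**(2 * n), p**n
    q = (F(x) - F(y)) // (x - y)   # exact integer arithmetic
    print(n, q)                    # always 1: the quotient does not vanish as n grows
```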
https://en.wikipedia.org/wiki/Computational%20visualistics
The term computational visualistics addresses the whole range of scientifically investigating pictures "in" the computer. Overview Images take a rather prominent place in contemporary life in western societies. Together with language, they have been connected to human culture from the very beginning. For about one century – after several millennia of the written word's dominance – their part is increasing again remarkably. Steps toward a general science of images, which we may call 'general visualistics' in analogy to general linguistics, have only been taken recently. So far, a unique scientific basis for circumscribing and describing the heterogeneous phenomenon "image" in an interpersonally verifiable manner has still been missing, while distinct aspects falling in the domain of visualistics have predominantly been dealt with in several other disciplines, among them in particular philosophy, psychology, and art history. Last (though not least), important contributions to certain aspects of the new science of images have come from computer science. In computer science, too, the consideration of pictures evolved originally along several more or less independent questions, which led to proper sub-disciplines: computer graphics is certainly the most "visible" among them. Only recently has the effort increased to finally form a unique and partially autonomous branch of computer science dedicated to images. In analogy to computational linguistics, the artificial expression computational visualistics is used for addressing the whole range of scientifically investigating pictures "in" the computer. Areas covered For a science of images within computer science, the abstract data type "image" (or perhaps several such types) stands in the center of interest together with the potential implementations. There are three main groups of algorithms for that data type to be considered in computational visualistics: Algorithms from "image" to "image" In the field called image processin
https://en.wikipedia.org/wiki/Label%20printer%20applicator
A label printer applicator is a basic robot that can automatically print and apply pressure-sensitive labels to various products. Some types of labeling include shipping labeling, content labeling, graphic images, and labeling to comply with specific standards such as those of GS1 and the Universal Product Code (UPC). A pressure-sensitive label consists of a label substrate and adhesive. First developed in the late 1970s, these machines are now built by over 70 manufacturers worldwide. Design Basic label printer applicators consist of three primary parts: a printer, or print engine, an applicator and a method to handle label and ribbons, referred to as media. Computing power also has the potential to increase the efficiency of label printer applicators. Print engine The print engine can be taken from an industrial table top printer, it can be a specifically designed module that can be "bolted" onto an applicator or it can be a proprietary element constructed by the printer applicator manufacturer. A print engine's primary function is to accept data from a computer and print the data onto a label for application. This printing can be accomplished using either the direct thermal method or the thermal transfer method. Both methods heat up very fine elements (up to 600 per inch) on a print head. Direct thermal burns the image onto the face of specially designed label stock. This is the preferred method for shipping labels and is also very popular in Europe. The thermal transfer process utilizes a ribbon coated with wax, resin, or a hybrid of the two. The coating is heated and melted onto the surface of the label substrate. Thermal transfer is the most popular method in the United States. The printer knows what to print via data communication from an outside software package, much like common inkjet printers. The software delivers data formatted in a specific layout and the printer reads the format based on its own driver. Applicator The applicato
https://en.wikipedia.org/wiki/Limit%20set
In mathematics, especially in the study of dynamical systems, a limit set is the state a dynamical system reaches after an infinite amount of time has passed, by either going forward or backwards in time. Limit sets are important because they can be used to understand the long term behavior of a dynamical system. A system that has reached its limiting set is said to be at equilibrium. Types fixed points periodic orbits limit cycles attractors In general, limit sets can be very complicated as in the case of strange attractors, but for 2-dimensional dynamical systems the Poincaré–Bendixson theorem provides a simple characterization of all nonempty, compact $\omega$-limit sets that contain at most finitely many fixed points as a fixed point, a periodic orbit, or a union of fixed points and homoclinic or heteroclinic orbits connecting those fixed points. Definition for iterated functions Let $X$ be a metric space, and let $f : X \to X$ be a continuous function. The $\omega$-limit set of $x \in X$, denoted by $\omega(x, f)$, is the set of cluster points of the forward orbit $\{f^n(x)\}_{n \in \mathbb{N}}$ of the iterated function $f$. Hence, $y \in \omega(x, f)$ if and only if there is a strictly increasing sequence of natural numbers $\{n_k\}_{k \in \mathbb{N}}$ such that $f^{n_k}(x) \to y$ as $k \to \infty$. Another way to express this is $\omega(x, f) = \bigcap_{n \in \mathbb{N}} \overline{\{f^k(x) : k > n\}},$ where $\overline{S}$ denotes the closure of set $S$. The points in the limit set are non-wandering (but may not be recurrent points). This may also be formulated as the outer limit (limsup) of a sequence of sets, such that $\omega(x, f) = \limsup_{n \to \infty} \{f^n(x)\}.$ If $f$ is a homeomorphism (that is, a bicontinuous bijection), then the $\alpha$-limit set is defined in a similar fashion, but for the backward orbit; i.e. $\alpha(x, f) = \omega(x, f^{-1})$. Both sets are $f$-invariant, and if $X$ is compact, they are compact and nonempty. Definition for flows Given a real dynamical system $(T, X, \varphi)$ with flow $\varphi : \mathbb{R} \times X \to X$ and a point $x$, we call a point $y$ an $\omega$-limit point of $x$ if there exists a sequence $(t_n)$ in $\mathbb{R}$ with $t_n \to \infty$ so that $\varphi(t_n, x) \to y$. For an orbit $\gamma$ of $(T, X, \varphi)$, we say that $y$ is an $\omega$-limit point of $\gamma$ if it is an $\omega$-limit point of some point on the orbit. Analogously we call $y$ an $\alpha$-limit point of $x$ if there exists a sequence $(t_n)$ in $\mathbb{R}$ with $t_n \to -\infty$ so that $\varphi(t_n, x) \to y$. For a
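For iterated functions the ω-limit set can be estimated numerically by discarding a transient and then collecting the cluster points of the forward orbit. A minimal sketch using the logistic map with parameter 3.5, an illustrative choice whose attractor is a period-4 orbit:

```python
def f(x, r=3.5):
    return r * x * (1 - x)

x = 0.2
for _ in range(10_000):     # discard the transient
    x = f(x)

omega = set()
for _ in range(10_000):     # collect (rounded) cluster points of the forward orbit
    x = f(x)
    omega.add(round(x, 6))

print(sorted(omega))        # four points: the attracting period-4 orbit at r = 3.5
```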
https://en.wikipedia.org/wiki/Bar%20induction
Bar induction is a reasoning principle used in intuitionistic mathematics, introduced by L. E. J. Brouwer. Bar induction's main use is the intuitionistic derivation of the fan theorem, a key result used in the derivation of the uniform continuity theorem. It is also useful in giving constructive alternatives to other classical results. The goal of the principle is to prove properties for all infinite sequences of natural numbers (called choice sequences in intuitionistic terminology), by inductively reducing them to properties of finite lists. Bar induction can also be used to prove properties about all choice sequences in a spread (a special kind of set). Definition Given a choice sequence $\alpha$, any finite sequence consisting of its first elements is called an initial segment of this choice sequence. There are three forms of bar induction currently in the literature, each one placing certain restrictions on a pair of predicates, and the key differences are highlighted using bold font. Decidable bar induction (BID) Given two predicates $R$ and $A$ on finite sequences of natural numbers such that all of the following conditions hold: every choice sequence contains at least one initial segment satisfying $R$ at some point (this is expressed by saying that $R$ is a bar); $R$ is decidable (i.e. our bar is decidable); every finite sequence satisfying $R$ also satisfies $A$ (so $A$ holds for every choice sequence beginning with the aforementioned finite sequence); if all extensions of a finite sequence by one element satisfy $A$, then that finite sequence also satisfies $A$ (this is sometimes referred to as $A$ being upward hereditary); then we can conclude that $A$ holds for the empty sequence (i.e. $A$ holds for all choice sequences starting with the empty sequence). This principle of bar induction is favoured in the works of A. S. Troelstra, S. C. Kleene and Albert Dragalin. Thin bar induction (BIT) Given two predicates $R$ and $A$ on finite sequences of natural numbers such that all of the following co
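The inductive reduction can be made concrete when the recursion is restricted to binary choice sequences (the fan), where every finite sequence has only finitely many one-element extensions; over all natural-number sequences the hereditary clause quantifies over infinitely many extensions and is not directly executable. A toy Python sketch with an assumed decidable bar R:

```python
def R(s):
    """A decidable bar on finite 0/1 sequences: every infinite binary
    sequence reaches it by length 4 at the latest."""
    return s.count(1) >= 2 or len(s) >= 4

def A(s):
    """A(s) holds if s hits the bar, or if all one-element extensions
    satisfy A (the upward-hereditary clause, read as a recursion)."""
    return R(s) or all(A(s + [b]) for b in (0, 1))

print(A([]))  # True: A holds for the empty sequence, as bar induction concludes
```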
https://en.wikipedia.org/wiki/Stability%20theory
In mathematics, stability theory addresses the stability of solutions of differential equations and of trajectories of dynamical systems under small perturbations of initial conditions. The heat equation, for example, is a stable partial differential equation because small perturbations of initial data lead to small variations in temperature at a later time as a result of the maximum principle. In partial differential equations one may measure the distances between functions using Lp norms or the sup norm, while in differential geometry one may measure the distance between spaces using the Gromov–Hausdorff distance. In dynamical systems, an orbit is called Lyapunov stable if the forward orbit of any point that starts in a small enough neighborhood of it stays in a small (but perhaps larger) neighborhood. Various criteria have been developed to prove stability or instability of an orbit. Under favorable circumstances, the question may be reduced to a well-studied problem involving eigenvalues of matrices. A more general method involves Lyapunov functions. In practice, any one of a number of different stability criteria are applied. Overview in dynamical systems Many parts of the qualitative theory of differential equations and dynamical systems deal with asymptotic properties of solutions and the trajectories—what happens with the system after a long period of time. The simplest kind of behavior is exhibited by equilibrium points, or fixed points, and by periodic orbits. If a particular orbit is well understood, it is natural to ask next whether a small change in the initial condition will lead to similar behavior. Stability theory addresses the following questions: Will a nearby orbit indefinitely stay close to a given orbit? Will it converge to the given orbit? In the former case, the orbit is called stable; in the latter case, it is called asymptotically stable and the given orbit is said to be attracting. An equilibrium solution to an autonomous system of first ord
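The eigenvalue criterion mentioned above is easy to apply numerically: linearize at the equilibrium and inspect the real parts. A minimal sketch for a damped pendulum (the damping value is an illustrative assumption):

```python
import numpy as np

# Damped pendulum x'' + c x' + sin(x) = 0 written as a first-order system;
# Jacobian at the equilibrium (x, x') = (0, 0), with assumed damping c = 0.5:
c = 0.5
J = np.array([[0.0, 1.0],
              [-1.0, -c]])

eigvals = np.linalg.eigvals(J)
print(eigvals)                    # a complex pair with negative real part
print(np.all(eigvals.real < 0))   # True: the equilibrium is asymptotically stable
```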
https://en.wikipedia.org/wiki/Stable%20manifold%20theorem
In mathematics, especially in the study of dynamical systems and differential equations, the stable manifold theorem is an important result about the structure of the set of orbits approaching a given hyperbolic fixed point. It roughly states that, near a hyperbolic fixed point of a local diffeomorphism, there exists a local stable manifold containing that fixed point. This manifold has dimension equal to the number of eigenvalues of the Jacobian matrix at the fixed point that have absolute value less than 1. Stable manifold theorem Let $f : U \subset \mathbb{R}^n \to \mathbb{R}^n$ be a smooth map with hyperbolic fixed point at $p$. We denote by $W^s(p)$ the stable set and by $W^u(p)$ the unstable set of $p$. The theorem states that $W^s(p)$ is a smooth manifold and its tangent space has the same dimension as the stable space of the linearization of $f$ at $p$, and that $W^u(p)$ is a smooth manifold and its tangent space has the same dimension as the unstable space of the linearization of $f$ at $p$. Accordingly $W^s(p)$ is a stable manifold and $W^u(p)$ is an unstable manifold. See also Center manifold theorem Lyapunov exponent Notes References External links Dynamical systems Theorems in dynamical systems
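For a concrete reading of the dimension count, take the Jacobian at the fixed point and split its eigenvectors by whether the eigenvalue modulus lies below or above 1; for a nonlinear map these eigenspaces are only the tangent spaces of the stable and unstable manifolds. A sketch with an illustrative linear saddle:

```python
import numpy as np

# A linear saddle map f(x) = A x with hyperbolic fixed point at the origin
# (the matrix is an illustrative choice):
A = np.array([[0.5, 0.1],
              [0.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)
stable   = eigvecs[:, np.abs(eigvals) < 1]  # tangent directions of W^s at the fixed point
unstable = eigvecs[:, np.abs(eigvals) > 1]  # tangent directions of W^u
print(np.abs(eigvals))   # [0.5, 2.0]: one stable and one unstable direction
```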
https://en.wikipedia.org/wiki/280%20%28number%29
280 (two hundred [and] eighty) is the natural number after 279 and before 281. In mathematics The denominator of the eighth harmonic number, 280 is an octagonal number. 280 is the smallest octagonal number that is a half of another octagonal number. There are 280 plane trees with ten nodes. As a consequence of this, 18 people around a round table can shake hands with each other in non-crossing ways, in 280 different ways (this includes rotations). Integers from 281 to 289 281 282 283 284 285 286 287 288 289 References Integers
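Both octagonal-number claims are easy to verify by enumeration; a small Python check (function name my own):

```python
def octagonal(n):
    return n * (3 * n - 2)

octs = {octagonal(n) for n in range(1, 100)}
halved = sorted(o for o in octs if 2 * o in octs)
print(halved[0])                     # 280: the smallest octagonal number that is half of another
print(octagonal(10), octagonal(14))  # 280 and 560 are the pair in question
```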
https://en.wikipedia.org/wiki/290%20%28number%29
290 (two hundred [and] ninety) is the natural number following 289 and preceding 291. In mathematics The product of three primes, 290 is a sphenic number, and the sum of four consecutive primes (67 + 71 + 73 + 79). The sum of the squares of the divisors of 17 is 290. Not only is it a nontotient and a noncototient, it is also an untouchable number. 290 is the 16th member of the Mian–Chowla sequence; it cannot be obtained as the sum of any two previous terms in the sequence. See also the Bhargava–Hanke 290 theorem. Integers from 291 to 299 291 292 293 294 295 296 296 = 2³·37, a refactorable number, unique period in base 2, the number of regions formed by drawing the line segments connecting any two of the 12 perimeter points of a 2 × 4 grid of squares, and the number of surface points on an 8³ cube. 297 297 = 3³·11, the number of integer partitions of 17, a decagonal number, and a Kaprekar number 298 298 = 2·149, is nontotient, noncototient, and the number of polynomial symmetric functions of matrix of order 6 under separate row and column permutations 299 299 = 13·23, a highly cototient number, a self number, and the twelfth cake number References Integers
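Several of the facts above can be verified with a few lines of Python (the helper functions are my own):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def prime_factors(n):
    out, d = [], 2
    while d * d <= n:
        while n % d == 0:
            out.append(d); n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

print(prime_factors(290))                        # [2, 5, 29]: three distinct primes, so sphenic
print([p for p in range(67, 80) if is_prime(p)]) # [67, 71, 73, 79]: consecutive, summing to 290
print(sum(d * d for d in (1, 17)))               # 290: sum of the squares of the divisors of 17
print(297 * 297, 88 + 209)                       # 88209 and 297: the Kaprekar split of 297**2
```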
https://en.wikipedia.org/wiki/Kauri-butanol%20value
The kauri-butanol value ("Kb value") is an international, standardized measure of solvent power for a hydrocarbon solvent, and is governed by an ASTM standardized test, ASTM D1133. The result of this test is a scaleless index, usually referred to as the "Kb value". A higher Kb value means the solvent is more aggressive or active in the ability to dissolve certain materials. Mild solvents have low scores in the tens and twenties; powerful solvents like chlorinated solvents and naphthenic aromatic solvents (i.e. "High Sol 10", "High Sol 15") have ratings that are in the low hundreds. In terms of the test itself, the kauri-butanol value (Kb) of a chemical shows the maximum amount of the hydrocarbon that can be added to a solution of kauri resin (a thick, gum-like material) in butanol (butyl alcohol) without causing cloudiness. Since kauri resin is readily soluble in butyl alcohol but not in most hydrocarbon solvents, the resin solution will tolerate only a certain amount of dilution. "Stronger" solvents such as benzene can be added in a greater amount (and thus have a higher Kb value) than "weaker" solvents like mineral spirits. References Product certification Units of measurement Kauri gum
https://en.wikipedia.org/wiki/Spin%20Hall%20effect
The spin Hall effect (SHE) is a transport phenomenon predicted by Russian physicists Mikhail I. Dyakonov and Vladimir I. Perel in 1971. It consists of the appearance of spin accumulation on the lateral surfaces of an electric current-carrying sample, the signs of the spin directions being opposite on the opposing boundaries. In a cylindrical wire, the current-induced surface spins will wind around the wire. When the current direction is reversed, the direction of spin orientation is also reversed. Definition The spin Hall effect is a transport phenomenon consisting of the appearance of spin accumulation on the lateral surfaces of a sample carrying electric current. The opposing surface boundaries will have spins of opposite sign. It is analogous to the classical Hall effect, where charges of opposite sign appear on the opposing lateral surfaces in an electric-current carrying sample in a magnetic field. In the case of the classical Hall effect the charge build-up at the boundaries is in compensation for the Lorentz force acting on the charge carriers in the sample due to the magnetic field. No magnetic field is needed for the spin Hall effect, which is a purely spin-based phenomenon. The spin Hall effect belongs to the same family as the anomalous Hall effect, known for a long time in ferromagnets, which also originates from spin–orbit interaction. History The spin Hall effect (direct and inverse) was predicted by Russian physicists Mikhail I. Dyakonov and Vladimir I. Perel in 1971. They also introduced for the first time the notion of spin current. In 1983 Averkiev and Dyakonov proposed a way to measure the inverse spin Hall effect under optical spin orientation in semiconductors. The first experimental demonstration of the inverse spin Hall effect, based on this idea, was performed by Bakun et al. in 1984. The term "spin Hall effect" was introduced by Hirsch, who re-predicted this effect in 1999. Experimentally, the (direct) spin Hall effect was observed i
https://en.wikipedia.org/wiki/Seasonal%20thermal%20energy%20storage
Seasonal thermal energy storage (STES), also known as inter-seasonal thermal energy storage, is the storage of heat or cold for periods of up to several months. The thermal energy can be collected whenever it is available and be used whenever needed, such as in the opposing season. For example, heat from solar collectors or waste heat from air conditioning equipment can be gathered in hot months for space heating use when needed, including during winter months. Waste heat from industrial process can similarly be stored and be used much later or the natural cold of winter air can be stored for summertime air conditioning. STES stores can serve district heating systems, as well as single buildings or complexes. Among seasonal storages used for heating, the design peak annual temperatures generally are in the range of , and the temperature difference occurring in the storage over the course of a year can be several tens of degrees. Some systems use a heat pump to help charge and discharge the storage during part or all of the cycle. For cooling applications, often only circulation pumps are used. Examples for district heating include Drake Landing Solar Community where ground storage provides 97% of yearly consumption without heat pumps, and Danish pond storage with boosting. STES technologies There are several types of STES technology, covering a range of applications from single small buildings to community district heating networks. Generally, efficiency increases and the specific construction cost decreases with size. Underground thermal energy storage UTES (underground thermal energy storage), in which the storage medium may be geological strata ranging from earth or sand to solid bedrock, or aquifers. UTES technologies include: ATES (aquifer thermal energy storage). An ATES store is composed of a doublet, totaling two or more wells into a deep aquifer that is contained between impermeable geological layers above and below. One half of the doublet is for w
https://en.wikipedia.org/wiki/Check%20Point%20Integrity
Check Point Integrity is an endpoint security software product developed by Check Point Software Technologies. It is designed to protect personal computers and the networks they connect to from computer worms, Trojan horses, spyware, and intrusion attempts by hackers. The software aims to stop new PC threats and attacks before signature updates have been installed on the PC. The software includes: network access controls that detect and remedy security policy violations before a PC is allowed to connect to a network; application controls that block or terminate malicious software programs before they can transmit information to an unauthorized party; a personal firewall; an intrusion prevention system (IPS); spyware detection and removal; and instant messaging security tools. An administrator manages the security policies that apply to groups of users from a central console and server. Check Point acquired the Integrity software as part of its acquisition of endpoint security start-up Zone Labs in 2004. The Integrity software, released in early 2002, was derived from the ZoneAlarm security technology and added central policy management and network access control functions. Integrity was integrated with network gateways (the Cisco VPN 3000 series) to ensure that a PC met security requirements before it was granted access to the network. Demand for endpoint security grew in 2003 after the SQL Slammer and Blaster computer worms reportedly caused extensive damage, despite widespread use of antivirus software on personal computers. A number of destructive worms that followed, and the subsequent rise of spyware as a significant problem, continued to increase demand for endpoint security products. Data privacy and integrity regulations and required security audits mandated by governmental and professional authorities, along with infections and damage caused by guest PC access, have also prompted use of such security sof
https://en.wikipedia.org/wiki/Cell%20software%20development
Software development for the Cell microprocessor involves a mixture of conventional development practices for the PowerPC-compatible PPU core, and novel software development challenges with regard to the functionally reduced SPU coprocessors. Linux on Cell An open source software-based strategy was adopted to accelerate the development of a Cell BE ecosystem and to provide an environment to develop Cell applications, including a GCC-based Cell compiler, binutils and a port of the Linux operating system. Octopiler Octopiler is IBM's prototype compiler to allow software developers to write code for Cell processors. Software portability Adapting VMX for SPU Differences between VMX and SPU The VMX (Vector Multimedia Extensions) technology is conceptually similar to the vector model provided by the SPU processors, but there are many significant differences. The VMX Java mode conforms to the Java Language Specification 1 subset of the default IEEE Standard, extended to include IEEE and C9X compliance where the Java standard falls silent. In a typical implementation, non-Java mode converts denormal values to zero but Java mode traps into an emulator when the processor encounters such a value. The IBM PPE Vector/SIMD manual does not define operations for double-precision floating point, though IBM has published material implying certain double-precision performance numbers associated with the Cell PPE VMX technology. Intrinsics Compilers for Cell provide intrinsics to expose useful SPU instructions in C and C++. Instructions that differ only in the type of operand (such as a, ai, ah, ahi, fa, and dfa for addition) are typically represented by a single C/C++ intrinsic which selects the proper instruction based on the type of the operand. Porting VMX code for SPU There is a great body of code which has been developed for other IBM Power microprocessors that could potentially be adapted and recompiled to run on the SPU. This code base includes VMX code that runs under
https://en.wikipedia.org/wiki/Power%20dividers%20and%20directional%20couplers
Power dividers (also power splitters and, when used in reverse, power combiners) and directional couplers are passive devices used mostly in the field of radio technology. They couple a defined amount of the electromagnetic power in a transmission line to a port enabling the signal to be used in another circuit. An essential feature of directional couplers is that they only couple power flowing in one direction. Power entering the output port is coupled to the isolated port but not to the coupled port. A directional coupler designed to split power equally between two ports is called a hybrid coupler. Directional couplers are most frequently constructed from two coupled transmission lines set close enough together such that energy passing through one is coupled to the other. This technique is favoured at the microwave frequencies where transmission line designs are commonly used to implement many circuit elements. However, lumped component devices are also possible at lower frequencies, such as the audio frequencies encountered in telephony. Also at microwave frequencies, particularly the higher bands, waveguide designs can be used. Many of these waveguide couplers correspond to one of the conducting transmission line designs, but there are also types that are unique to waveguide. Directional couplers and power dividers have many applications. These include providing a signal sample for measurement or monitoring, feedback, combining feeds to and from antennas, antenna beam forming, providing taps for cable distributed systems such as cable TV, and separating transmitted and received signals on telephone lines. Notation and symbols The symbols most often used for directional couplers are shown in figure 1. The symbol may have the coupling factor in dB marked on it. Directional couplers have four ports. Port 1 is the input port where power is applied. Port 3 is the coupled port where a portion of the power applied to port 1 appears. Port 2 is the transm
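Working in decibels, a coupling factor of C dB means the coupled port receives a fraction $10^{-C/10}$ of the input power. A minimal Python sketch for an ideal coupler (directivity and insertion loss ignored; the function name is my own):

```python
def coupled_power(p_in_watts, coupling_db):
    """Power at the coupled port of an ideal directional coupler
    with the given coupling factor in dB."""
    return p_in_watts * 10 ** (-coupling_db / 10)

print(coupled_power(1.0, 10))  # 0.1 W: a 10 dB coupler taps one tenth of the input power
print(coupled_power(1.0, 3))   # ~0.5 W: a 3 dB (hybrid) coupler splits power equally
```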
https://en.wikipedia.org/wiki/Cell%20microprocessor%20implementations
Cell microprocessors are multi-core processors that use cellular architecture for high performance distributed computing. The first commercial Cell microprocessor, the Cell BE, was designed for the Sony PlayStation 3. IBM designed the PowerXCell 8i for use in the Roadrunner supercomputer. Implementation First edition Cell on 90 nm CMOS IBM has published information concerning two different versions of Cell in this process, an early engineering sample designated DD1, and an enhanced version designated DD2 intended for production. The main enhancement in DD2 was a small lengthening of the die to accommodate a larger PPE core, which is reported to "contain more SIMD/vector execution resources". Some preliminary information released by IBM references the DD1 variant. As a result, some early journalistic accounts of the Cell's capabilities now differ from production hardware. Cell floorplan PowerPoint material accompanying an STI presentation given by Dr Peter Hofstee includes a photograph of the DD2 Cell die overdrawn with functional unit boundaries, captioned by name, which reveals the breakdown of silicon area by function unit as follows: SPE floorplan Additional details concerning the internal SPE implementation have been disclosed by IBM engineers, including Peter Hofstee, IBM's chief architect of the synergistic processing element, in a scholarly IEEE publication. This document includes a photograph of the 2.54 mm × 5.81 mm SPE, as implemented in 90-nm SOI. In this technology, the SPE contains 21 million transistors, of which 14 million are contained in arrays (a term presumably designating register files and the local store) and 7 million transistors are logic. This photograph is overdrawn with functional unit boundaries, captioned by name, which reveals the breakdown of silicon area by function unit as follows: Understanding the dispatch pipes is important to write efficient code. In the SPU architecture, two instructions c
https://en.wikipedia.org/wiki/Bridge%20management%20system
A bridge management system (BMS) is a set of methodologies and procedures for managing information about bridges. Such a system can document and process data along the entire life cycle of the structure: project design, construction, monitoring, maintenance and end of operation. First used in the literature in 1987, the acronym BMS is commonly used in structural engineering to refer to a single digital tool, or a combination of digital tools and software, that supports the documentation of every practice related to a single structure. Such a software architecture has to meet the needs of road asset managers interested in tracking the serviceability status of bridges through a workflow mainly based on four components: data inventory, cost and construction management, structural analysis and assessment, and maintenance planning. The implementation of a BMS is usually built on top of relational databases, geographic information systems (GIS) and building information modeling (BIM) platforms, also named bridge information modeling (BrIM), with photogrammetric and laser scanning processing software used for the management of data collected during targeted inspections. The output of the whole procedure, as stated also in some national guidelines of different countries, usually consists of a prioritization of interventions on bridges classified in different risk levels according to the information collected and processed. History Since the late 1980s the structural health assessment and monitoring of bridges has represented a critical topic in the field of civil infrastructure management. In the 1990s, the Federal Highway Administration (FHWA) of the United States promoted and sponsored PONTIS and BRIDGEIT, two computerized platforms for viaduct inventory and monitoring named BMSs. In the following years, also outside the U.S., the growing need for organized and digitized road asset management has led responsible national agencies to adopt increasingly complex solutions able to meet their obj
https://en.wikipedia.org/wiki/Soil%20production%20function
Soil production function refers to the rate of bedrock weathering into soil as a function of soil thickness. A general model suggested that the rate of physical weathering of bedrock ($de/dt$) can be represented as an exponential decline with soil thickness: $\frac{de}{dt} = P_0 e^{-kh},$ where $h$ is soil thickness [m], $P_0$ [mm/year] is the potential (or maximum) weathering rate of bedrock and $k$ [m⁻¹] is an empirical constant. The reduction of weathering rate with thickening of soil is related to the exponential decrease of temperature amplitude with increasing depth below the soil surface, and also the exponential decrease in average water penetration (for freely-drained soils). Parameters $P_0$ and $k$ are related to the climate and type of parent materials. Field studies found the value of $P_0$ ranges from 0.08 to 2.0 mm/yr for sites in Northern California, and 0.05–0.14 mm/yr for sites in Southeastern Australia. Meanwhile values of $k$ do not vary significantly, ranging from 2 to 4 m⁻¹. A number of landscape evolution models have adopted the so-called humped model. This model dates back to G.K. Gilbert's Report on the Geology of the Henry Mountains (1877). Gilbert reasoned that the weathering of bedrock was fastest under an intermediate thickness of soil and slower under exposed bedrock or under thick mantled soil. This is because chemical weathering requires the presence of water. Under thin soil or exposed bedrock water tends to run off, reducing the chance of the decomposition of bedrock. See also Biorhexistasy Hillslope evolution Parent material Pedogenesis Soil functions Weathering References Bibliography Mathematical modeling Pedology
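A minimal numerical sketch of the exponential model, with P0 and k chosen inside the ranges quoted above (the particular values are illustrative assumptions):

```python
import numpy as np

def soil_production_rate(h, P0=1.0, k=3.0):
    """Exponential soil production function de/dt = P0 * exp(-k * h);
    P0 in mm/yr and k in 1/m are illustrative mid-range values."""
    return P0 * np.exp(-k * h)

for h in (0.0, 0.25, 0.5, 1.0):            # soil thickness in metres
    print(h, soil_production_rate(h))      # the weathering rate declines as soil thickens
```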
https://en.wikipedia.org/wiki/Zero-order%20hold
The zero-order hold (ZOH) is a mathematical model of the practical signal reconstruction done by a conventional digital-to-analog converter (DAC). That is, it describes the effect of converting a discrete-time signal to a continuous-time signal by holding each sample value for one sample interval. It has several applications in electrical communication. Time-domain model A zero-order hold reconstructs the following continuous-time waveform from a sample sequence x[n], assuming one sample per time interval T: $x_{\mathrm{ZOH}}(t) = \sum_{n=-\infty}^{\infty} x[n]\,\mathrm{rect}\!\left(\frac{t - nT}{T} - \frac{1}{2}\right),$ where $\mathrm{rect}(\cdot)$ is the rectangular function. The function $\mathrm{rect}\!\left(\frac{t}{T} - \frac{1}{2}\right)$ is depicted in Figure 1, and $x_{\mathrm{ZOH}}(t)$ is the piecewise-constant signal depicted in Figure 2. Frequency-domain model The equation above for the output of the ZOH can also be modeled as the output of a linear time-invariant filter with impulse response equal to a rect function, and with input being a sequence of dirac impulses scaled to the sample values. The filter can then be analyzed in the frequency domain, for comparison with other reconstruction methods such as the Whittaker–Shannon interpolation formula suggested by the Nyquist–Shannon sampling theorem, or such as the first-order hold or linear interpolation between sample values. In this method, a sequence of Dirac impulses, xs(t), representing the discrete samples, x[n], is low-pass filtered to recover a continuous-time signal, x(t). Even though this is not what a DAC does in reality, the DAC output can be modeled by applying the hypothetical sequence of dirac impulses, xs(t), to a linear, time-invariant filter with such characteristics (which, for an LTI system, are fully described by the impulse response) so that each input impulse results in the correct constant pulse in the output. Begin by defining a continuous-time signal from the sample values, as above but using delta functions instead of rect functions: $x_s(t) = \sum_{n=-\infty}^{\infty} x[n]\,\delta\!\left(\frac{t - nT}{T}\right) = T \sum_{n=-\infty}^{\infty} x[n]\,\delta(t - nT).$ The scaling by $T$, which arises naturally by time-scaling the delta function, has the result that the mean value of xs(t) is equal to the mean v
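In discrete simulation the ZOH is just sample repetition: each x[n] is held for one interval T. A minimal NumPy sketch (the sample values and T are illustrative):

```python
import numpy as np

T = 0.5                                  # sample interval (assumed)
x = np.array([0.0, 1.0, 0.5, -0.25])     # sample sequence x[n]

oversample = 100                         # fine time grid within each interval
x_zoh = np.repeat(x, oversample)         # piecewise-constant staircase waveform
t = np.arange(x_zoh.size) * (T / oversample)
print(x_zoh[:3], x_zoh[oversample:oversample + 3])  # 0.0 held on [0, T), then 1.0 on [T, 2T)
```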
https://en.wikipedia.org/wiki/Peter%20Samson
Peter R. Samson (born 1941 in Fitchburg, Massachusetts) is an American computer scientist, best known for creating pioneering computer software for the TX-0 and PDP-1. Samson studied at the Massachusetts Institute of Technology (MIT) between 1958 and 1963. He wrote, with characteristic wit, the first editions of the Tech Model Railroad Club (TMRC) dictionary, a predecessor to the Jargon File. He appears in Hackers: Heroes of the Computer Revolution by Steven Levy. Career The Tech Model Railroad Club As a member of the Tech Model Railroad Club in his student days at MIT, Samson was noted for his contributions to the Signals and Power Subcommittee, the technical side of the club. Steven Levy's Hackers: Heroes of the Computer Revolution outlines Samson's interest in trains and electronics, and his influence in the club. Levy explains how the club was in fact Samson's gateway into hacking and his ability to manipulate electronics and machine code to create programs. Levy explains how Samson discovered his programming passion with the IBM 704, but was frustrated by the high level of security around the machine. Only those with very high clearance were able to actually handle the computer, with all programs submitted to be processed through the machine by someone else. This meant Samson would not find out the results of his programs until a few days after submitting them. Because of these restrictions on the IBM 704, it was not until Samson was introduced to the TX-0 that he could explore his obsession with computer programming, as members of the Railroad Club were able to access the computer directly without having to go through a superior. Dawn of software Working with Jack Dennis on the TX-0 at MIT Building 26, he developed an interest in computing waveforms to synthesize music. For the PDP-1 he wrote the Harmony Compiler with which PDP-1 users coded music. He wrote the Expensive Planetarium star display for Spacewar!. Also for the PDP-1 he wrote TJ-2 (Type Justify
https://en.wikipedia.org/wiki/Carroll%20diagram
A Carroll diagram, Lewis Carroll's square, biliteral diagram or a two-way table is a diagram used for grouping things in a yes/no fashion. Numbers or objects are either categorised as 'x' (having an attribute x) or 'not x' (not having an attribute 'x'). They are named after Lewis Carroll, the pseudonym of polymath Charles Lutwidge Dodgson. Usage Although Carroll diagrams can be as simple as the first one above, the most well known types are those similar to the second one, where two attributes are shown. The 'universe' of a Carroll diagram is contained within the boxes in the diagram, as any number or object has to either have an attribute or not have it. Carroll diagrams are often learnt by schoolchildren, but they can also be used outside the field of education, since they are a tidy way of categorising and displaying information. See also Diagram Karnaugh map Set theory Venn diagram The Game of Logic References Further reading External links Lewis Carroll: Logic, Internet Encyclopedia of Philosophy Graphical concepts in set theory Diagrams Grouping
https://en.wikipedia.org/wiki/Carry%20flag
In computer processors the carry flag (usually indicated as the C flag) is a single bit in a system status register/flag register used to indicate when an arithmetic carry or borrow has been generated out of the most significant arithmetic logic unit (ALU) bit position. The carry flag enables numbers larger than a single ALU width to be added/subtracted by carrying (adding) a binary digit from a partial addition/subtraction to the least significant bit position of a more significant word. This is typically programmed by the user of the processor on the assembly or machine code level, but can also happen internally in certain processors, via digital logic or microcode, where some processors have wider registers and arithmetic instructions than the (combinatorial, or "physical") ALU. It is also used to extend bit shifts and rotates in a similar manner on many processors (sometimes done via a dedicated flag). For subtractive operations, two (opposite) conventions are employed, as most machines set the carry flag on borrow while some machines (such as the 6502 and the PIC) instead reset the carry flag on borrow (and vice versa). Uses The carry flag is affected by the result of most arithmetic (and typically several bitwise) instructions and is also used as an input to many of them. Several of these instructions have two forms which either read or ignore the carry. In assembly languages these instructions are represented by mnemonics such as ADD/SUB, ADC/SBC (ADD/SUB including carry), SHL/SHR (bit shifts), ROL/ROR (bit rotates), RCR/RCL (rotate through carry), and so on. The use of the carry flag in this manner enables multi-word add, subtract, shift, and rotate operations. An example is what happens if one were to add 255 and 255 using 8-bit registers. The result should be 510, which is the 9-bit value 111111110 in binary. The 8 least significant bits, which are what gets stored in the register, would be 11111110 binary (254 decimal), but since there is carry out of bit 7 (the eighth bit)
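The 255 + 255 example, and the way the carry chains two partial additions into a 16-bit add, can be modeled in a few lines of Python (an ADD/ADC-style sketch, not any particular instruction set):

```python
MASK = 0xFF  # 8-bit registers

def add8(a, b, carry_in=0):
    """8-bit addition returning (result, carry flag)."""
    total = a + b + carry_in
    return total & MASK, int(total > MASK)

lo, c = add8(255, 255)
print(lo, c)                  # 254 1: the 9th bit of 510 lands in the carry flag

# Multi-word addition: 16-bit 0x01FF + 0x0101 via two 8-bit adds (ADD, then ADC)
lo, c = add8(0xFF, 0x01)      # low bytes
hi, _ = add8(0x01, 0x01, c)   # high bytes include the carry from the low bytes
print(hex((hi << 8) | lo))    # 0x300 == 0x01FF + 0x0101
```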
https://en.wikipedia.org/wiki/Parity%20flag
In computer processors the parity flag indicates whether the number of set bits in the binary representation of the result of the last operation is odd or even. It is normally a single bit in a processor status register. For example, assume a machine where a set parity flag indicates even parity. If the result of the last operation were 26 (11010 in binary), the parity flag would be 0 since the number of set bits is odd. Similarly, if the result were 10 (1010 in binary) then the parity flag would be 1. x86 processors In x86 processors, the parity flag reflects the parity only of the least significant byte of the result, and is set if the number of one bits is even. According to the Intel 80386 manual, the parity flag is changed in the x86 processor family by the following instructions: All arithmetic instructions; Compare instruction (equivalent to a subtract instruction without storing the result); Logical instructions - XOR, AND, OR; the TEST instruction (equivalent to the AND instruction without storing the result). the POPF instruction the IRET instruction an instruction or interrupt that causes a hardware task switch In conditional jumps, the parity flag is used: e.g. the JP instruction jumps to the given target when the parity flag is set and the JNP instruction jumps if it is not set. The flag may be also read directly with instructions such as PUSHF, which pushes the flags register on the stack. One common reason to test the parity flag is to check an unrelated x87-FPU flag. The FPU has four condition flags (C0 to C3), but they cannot be tested directly, and must instead be first copied to the flags register. When this happens, C0 is placed in the carry flag, C2 in the parity flag and C3 in the zero flag. The C2 flag is set when e.g. incomparable floating point values (NaN or unsupported format) are compared with the FUCOM instructions. References See also x86 archi
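The x86 convention is easy to model: look only at the least significant byte and set the flag on an even count of one bits. A minimal Python sketch reproducing the two examples above:

```python
def parity_flag(result):
    """x86-style parity flag: 1 when the least significant byte
    of the result has an even number of one bits."""
    return 1 - bin(result & 0xFF).count("1") % 2

print(parity_flag(26))  # 0: 11010 has three set bits (odd)
print(parity_flag(10))  # 1: 1010 has two set bits (even)
```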
https://en.wikipedia.org/wiki/Negative%20flag
In a computer processor the negative flag or sign flag is a single bit in a system status (flag) register used to indicate whether the result of the last mathematical operation produced a value in which the most significant bit (the leftmost bit) was set. In a two's complement interpretation of the result, the negative flag is set if the result was negative. For example, in an 8-bit signed number system, -37 will be represented as 1101 1011 in binary (the most significant bit, or sign bit, is 1), while +37 will be represented as 0010 0101 (the most significant bit is 0). The negative flag is set according to the result in the x86 series processors by the following instructions (referring to the Intel 80386 manual): All arithmetic operations except multiplication and division; compare instructions (equivalent to subtract instructions without storing the result); Logical instructions – XOR, AND, OR; TEST instructions (equivalent to AND instructions without storing the result). References Computer arithmetic
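A one-line model: the flag is simply the most significant bit of the fixed-width result. A Python sketch for 8-bit values:

```python
def negative_flag(result, width=8):
    """1 when the most significant bit of the result is set,
    i.e. the result is negative in two's complement."""
    return (result >> (width - 1)) & 1

print(negative_flag(-37 & 0xFF))  # 1: -37 is 1101 1011 in 8-bit two's complement
print(negative_flag(37))          # 0: +37 is 0010 0101
```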
https://en.wikipedia.org/wiki/Duality%20%28electrical%20circuits%29
In electrical engineering, electrical terms are associated into pairs called duals. A dual of a relationship is formed by interchanging voltage and current in an expression. The dual expression thus produced is of the same form, and the reason that the dual is always a valid statement can be traced to the duality of electricity and magnetism. Here is a partial list of electrical dualities: voltage – current parallel – serial (circuits) resistance – conductance voltage division – current division impedance – admittance capacitance – inductance reactance – susceptance short circuit – open circuit Kirchhoff's current law – Kirchhoff's voltage law. Thévenin's theorem – Norton's theorem History The use of duality in circuit theory is due to Alexander Russell who published his ideas in 1904. Examples Constitutive relations Resistor and conductor (Ohm's law): $v = iR \iff i = vG$ Capacitor and inductor – differential form: $i = C\frac{dv}{dt} \iff v = L\frac{di}{dt}$ Capacitor and inductor – integral form: $v = \frac{1}{C}\int i\,dt \iff i = \frac{1}{L}\int v\,dt$ Voltage division – current division: $v_{R_1} = v\,\frac{R_1}{R_1 + R_2} \iff i_{G_1} = i\,\frac{G_1}{G_1 + G_2}$ Impedance and admittance Resistor and conductor: $Z_R = R \iff Y_G = G$ Capacitor and inductor: $Z_C = \frac{1}{j\omega C} \iff Y_L = \frac{1}{j\omega L}$ See also Duality (electricity and magnetism) Duality (mechanical engineering) Dual impedance Dual graph Mechanical–electrical analogies List of dualities References Turner, Rufus P, Transistors Theory and Practice, Gernsback Library, Inc, New York, 1954, Chapter 6. Electrical engineering Electrical circuits
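Numerically, duality about a reference resistance R0 is the dual-impedance map $Z' = R_0^2 / Z$, under which a series R–L branch turns into its parallel G–C dual. A small Python sketch with illustrative values:

```python
import math

R0 = 1.0                      # normalizing reference resistance (assumed)
w = 2 * math.pi * 50.0        # angular frequency, 50 Hz
R, L = 10.0, 0.1              # a series R-L branch

Z = complex(R, w * L)         # series impedance R + jwL
Z_dual = R0**2 / Z            # its dual about R0
print(Z, Z_dual)              # resistance and reactance swap roles with conductance and susceptance
```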
https://en.wikipedia.org/wiki/VTun
VTun is a networking application which can set up Virtual Tunnels over TCP/IP networks. It supports Internet Protocol (IP), Point-to-Point Protocol (PPP) and Serial Line Internet Protocol (SLIP) protocols. It exists as the reference implementation of the Tun/Tap user-space tunnel driver which was included in the Linux kernel as of version 2.4, also originally developed by Maxim Krasnyansky. Bishop Clark is the current maintainer. Networking Like most other applications of its nature, VTun creates a single connection between two machines, over which it multiplexes all traffic. VTun connections are initiated via a TCP connection from the client to the server. The server then initiates a UDP connection to the client, if the UDP protocol is requested. The software allows the creation of tunnels, for routing traffic in a manner similar to PPP, as well as a bridge-friendly ethertap connection. Authentication VTun uses a Private Shared Key to negotiate a handshake via a challenge and response. Non-encrypting versions A continual source of concern, and the target of more than one strongly worded security assessment, is that the VTun server and client binary applications can be completely built without encryption support. When such binaries are used, the encryption between both endpoints is only a simple XOR cipher, which is completely trivial to decode. This type of build is not supported by the developers. References External links Internet protocols Free security software
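The weakness of a bare XOR stream is that one known plaintext block reveals the keystream. The sketch below is purely illustrative of that attack class and does not reproduce VTun's actual framing or key handling:

```python
key = bytes([0x5A]) * 16                     # a hypothetical repeating key

def xor_bytes(data, key):
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

ciphertext = xor_bytes(b"attack at dawn!!", key)
recovered_key = xor_bytes(ciphertext, b"attack at dawn!!")  # known-plaintext attack
print(recovered_key == key)                  # True: ciphertext XOR plaintext = keystream
```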
https://en.wikipedia.org/wiki/Number%20Forms
Number Forms is a Unicode block containing Unicode compatibility characters that have specific meaning as numbers, but are constructed from other characters. They consist primarily of vulgar fractions and Roman numerals. In addition to the characters in the Number Forms block, three fractions (¼, ½, and ¾) were inherited from ISO-8859-1, which was incorporated whole as the Latin-1 Supplement block. List of characters Block History The following Unicode-related documents record the purpose and process of defining specific characters in the Number Forms block: See also Latin script in Unicode Unicode symbols References Symbols Unicode Unicode blocks
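Python's unicodedata module can list the block directly, including the numeric value each character carries (the block spans U+2150 to U+218F):

```python
import unicodedata

for cp in range(0x2150, 0x2190):        # the Number Forms block
    ch = chr(cp)
    try:
        name = unicodedata.name(ch)
    except ValueError:
        continue                         # unassigned code point
    value = unicodedata.numeric(ch, None)
    print(f"U+{cp:04X} {ch} {name} = {value}")
```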
https://en.wikipedia.org/wiki/DNA%20repair-deficiency%20disorder
A DNA repair-deficiency disorder is a medical condition due to reduced functionality of DNA repair. DNA repair defects can cause an accelerated aging disease or an increased risk of cancer, or sometimes both. DNA repair defects and accelerated aging DNA repair defects are seen in nearly all of the diseases described as accelerated aging diseases, in which various tissues, organs or systems of the human body age prematurely. Because the accelerated aging diseases display different aspects of aging, but never every aspect, they are often called segmental progerias by biogerontologists. Human disorders with accelerated aging Ataxia-telangiectasia Bloom syndrome Cockayne syndrome Fanconi anemia Progeria (Hutchinson–Gilford progeria syndrome) Rothmund–Thomson syndrome Trichothiodystrophy Werner syndrome Xeroderma pigmentosum Examples Some examples of DNA repair defects causing progeroid syndromes in humans or mice are shown in Table 1. DNA repair defects distinguished from "accelerated aging" Most of the DNA repair deficiency diseases show varying degrees of "accelerated aging" or cancer (often some of both). But elimination of any gene essential for base excision repair kills the embryo—it is too lethal to display symptoms (much less symptoms of cancer or "accelerated aging"). Rothmund–Thomson syndrome and xeroderma pigmentosum display symptoms dominated by vulnerability to cancer, whereas progeria and Werner syndrome show the most features of "accelerated aging". Hereditary nonpolyposis colorectal cancer (HNPCC) is very often caused by a defective MSH2 gene leading to defective mismatch repair, but displays no symptoms of "accelerated aging". On the other hand, Cockayne syndrome and trichothiodystrophy show mainly features of accelerated aging, but apparently without an increased risk of cancer. Some DNA repair defects manifest as neurodegeneration rather than as cancer or "accelerated aging". (Also see the "DNA damage theory of aging" for a discussion of the e
https://en.wikipedia.org/wiki/Sense%20%28molecular%20biology%29
In molecular biology and genetics, the sense of a nucleic acid molecule, particularly of a strand of DNA or RNA, refers to the nature of the roles of the strand and its complement in specifying a sequence of amino acids. Depending on the context, sense may have slightly different meanings. For example, the negative-sense strand of DNA is equivalent to the template strand, whereas the positive-sense strand is the non-template strand whose nucleotide sequence is equivalent to the sequence of the mRNA transcript. DNA sense Because of the complementary nature of base-pairing between nucleic acid polymers, a double-stranded DNA molecule will be composed of two strands with sequences that are reverse complements of each other. To help molecular biologists specifically identify each strand individually, the two strands are usually differentiated as the "sense" strand and the "antisense" strand. An individual strand of DNA is referred to as positive-sense (also positive (+) or simply sense) if its nucleotide sequence corresponds directly to the sequence of an RNA transcript which is translated or translatable into a sequence of amino acids (provided that any thymine bases in the DNA sequence are replaced with uracil bases in the RNA sequence). The other strand of the double-stranded DNA molecule is referred to as negative-sense (also negative (−) or antisense), and is reverse complementary to both the positive-sense strand and the RNA transcript. It is actually the antisense strand that is used as the template from which RNA polymerases construct the RNA transcript, but the complementary base-pairing by which nucleic acid polymerization occurs means that the sequence of the RNA transcript will look identical to the positive-sense strand, apart from the RNA transcript's use of uracil instead of thymine. Sometimes the phrases coding strand and template strand are encountered in place of sense and antisense, respectively, and in the context of a double-stranded DNA molecule
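The strand relationships are mechanical enough to script: the template strand is the reverse complement of the sense strand, and the mRNA matches the sense strand with U in place of T. A minimal Python sketch with a made-up sequence:

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(strand):
    return strand.translate(COMPLEMENT)[::-1]

sense = "ATGGCCTTT"                    # positive-sense (coding) strand, 5'->3'
template = reverse_complement(sense)   # negative-sense (template) strand
mrna = sense.replace("T", "U")         # the transcript mirrors the sense strand
print(template)  # AAAGGCCAT
print(mrna)      # AUGGCCUUU
```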
https://en.wikipedia.org/wiki/List%20of%20MeSH%20codes%20%28D12.776%29
The following is a partial list of the "D" codes for Medical Subject Headings (MeSH), as defined by the United States National Library of Medicine (NLM). This list continues the information at List of MeSH codes (D12.644). Codes following these are found at List of MeSH codes (D13). For other MeSH codes, see List of MeSH codes. The source for this content is the set of 2006 MeSH Trees from the NLM. – proteins – albumins – c-reactive protein – conalbumin – lactalbumin – ovalbumin – avidin – parvalbumins – ricin – serum albumin – methemalbumin – prealbumin – serum albumin, bovine – serum albumin, radio-iodinated – technetium tc 99m aggregated albumin – algal proteins – amphibian proteins – xenopus proteins – amyloid – amyloid beta-protein – amyloid beta-protein precursor – serum amyloid a protein – serum amyloid p-component – antifreeze proteins – antifreeze proteins, type i – antifreeze proteins, type ii – antifreeze proteins, type iii – antifreeze proteins, type iv – apoproteins – apoenzymes – apolipoproteins – apolipoprotein A – apolipoprotein A1 – apolipoprotein A2 – apolipoprotein B – apolipoprotein C – apolipoprotein E – aprotinin – archaeal proteins – bacteriorhodopsins – dna topoisomerases, type i, archaeal – halorhodopsins – periplasmic proteins – armadillo domain proteins – beta-catenin – gamma catenin – plakophilins – avian proteins – bacterial proteins See List of MeSH codes (D12.776.097). – blood proteins See List of MeSH codes (D12.776.124). – carrier proteins See List of MeSH codes (D12.776.157). – cell cycle proteins – cdc25 phosphatase – cellular apoptosis susceptibility protein – cullin proteins – cyclin-dependent kinase inhibitor proteins – cyclin-dependent kinase inhibitor p15 – cyclin-dependent kinase inhibitor p16 – cyclin-dependent kinase inhibitor p18 – cyclin-dependent kinase inhibitor p19 – cyclin-dependent kinase inhibitor p21 – cyclin-dependent kinase inhibitor p27 – cyclin-de
https://en.wikipedia.org/wiki/Recurrent%20point
In mathematics, a recurrent point for a function f is a point that is in its own limit set by f. Any neighborhood containing the recurrent point will also contain (a countable number of) iterates of it as well. Definition Let $X$ be a Hausdorff space and $f : X \to X$ a function. A point $x \in X$ is said to be recurrent (for $f$) if $x \in \omega(x)$, i.e. if $x$ belongs to its $\omega$-limit set. This means that for each neighborhood $U$ of $x$ there exists $n > 0$ such that $f^n(x) \in U$. The set of recurrent points of $f$ is often denoted $R(f)$ and is called the recurrent set of $f$. Its closure is called the Birkhoff center of $f$, and appears in the work of George David Birkhoff on dynamical systems. Every recurrent point is a nonwandering point, hence if $f$ is a homeomorphism and $X$ is compact, then $R(f)$ is an invariant subset of the non-wandering set of $f$ (and may be a proper subset). References Limit sets
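Recurrence can be watched numerically: for an irrational rotation of the circle, every point returns arbitrarily close to itself. A small Python sketch (the rotation number is an illustrative choice):

```python
import math

alpha = math.sqrt(2) - 1         # irrational rotation number

def f(x):
    return (x + alpha) % 1.0     # rotation of the circle R/Z

x0, x = 0.3, 0.3
best = 1.0
for n in range(1, 20001):
    x = f(x)
    d = min(abs(x - x0), 1 - abs(x - x0))  # distance on the circle
    if d < best:
        best = d
        print(n, best)           # returns get ever closer: x0 is recurrent
```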
https://en.wikipedia.org/wiki/Topological%20conjugacy
In mathematics, two functions are said to be topologically conjugate if there exists a homeomorphism that will conjugate the one into the other. Topological conjugacy, and the related-but-distinct notion of topological equivalence of flows, are important in the study of iterated functions and more generally dynamical systems, since, if the dynamics of one iterative function can be determined, then that for a topologically conjugate function follows trivially. To illustrate this directly: suppose that $f$ and $g$ are iterated functions, and there exists a homeomorphism $h$ such that $g = h^{-1} \circ f \circ h,$ so that $f$ and $g$ are topologically conjugate. Then one must have $g^n = h^{-1} \circ f^n \circ h,$ and so the iterated systems are topologically conjugate as well. Here, $\circ$ denotes function composition. Definition $f : X \to X$, $g : Y \to Y$, and $h : Y \to X$ are continuous functions on topological spaces $X$ and $Y$. $g$ being topologically semiconjugate to $f$ means, by definition, that $h$ is a surjection such that $f \circ h = h \circ g$. $f$ and $g$ being topologically conjugate means, by definition, that they are topologically semiconjugate and $h$ is furthermore injective, then bijective, and its inverse is continuous too; i.e. $h$ is a homeomorphism; further, $h$ is termed a topological conjugation between $f$ and $g$. Flows Similarly, $\varphi$ on $X$ and $\psi$ on $Y$ are flows, with $X$, $Y$, and $h : Y \to X$ as above. $\psi$ being topologically semiconjugate to $\varphi$ means, by definition, that $h$ is a surjection such that $\varphi(h(y), t) = h(\psi(y, t))$, for each $y \in Y$, $t \in \mathbb{R}$. $\varphi$ and $\psi$ being topologically conjugate means, by definition, that they are topologically semiconjugate and $h$ is a homeomorphism. Examples The logistic map and the tent map are topologically conjugate. The logistic map of unit height and the Bernoulli map are topologically conjugate. For certain values in the parameter space, the Hénon map when restricted to its Julia set is topologically conjugate or semi-conjugate to the shift map on the space of two-sided sequences in two symbols. Discussion Topological conjugation – unlike semiconjugation – defines an equivalence relation in the space of all continuous surjections of a topological space to itself, by declari
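The tent-logistic example can be checked numerically: $h(x) = \sin^2(\pi x / 2)$ is the standard conjugating homeomorphism between the tent map and the logistic map with r = 4. A NumPy sketch verifying $L \circ h = h \circ T$ on a grid:

```python
import numpy as np

def tent(x):
    return np.where(x < 0.5, 2 * x, 2 - 2 * x)

def logistic(x):
    return 4 * x * (1 - x)

def h(x):
    return np.sin(np.pi * x / 2) ** 2   # the conjugating homeomorphism on [0, 1]

x = np.linspace(0, 1, 1001)
print(np.max(np.abs(logistic(h(x)) - h(tent(x)))))  # ~1e-16: the semiconjugacy identity holds
```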